Abstract: The exponential, Moore's-Law progress of electronics may be continued beyond the 10‐nm frontier if the currently dominant CMOS technology is replaced by hybrid CMOL circuits combining a silicon MOSFET stack and a few layers of parallel nanowires connected by self‐assembled molecular electronic devices. Such hybrids promise unparalleled performance for advanced information processing, but they require special architectures to compensate for specific features of the molecular devices, including low voltage gain and a possibly high fraction of faulty components. Neuromorphic networks, with their inherent defect tolerance, seem the most natural way to address these problems. Such circuits may be trained to perform advanced information processing, including (at least) effective pattern recognition and classification. We are developing a family of distributed crossbar network (CrossNet) architectures that combine the high connectivity of neuromorphic circuits with high component density. Preliminary estimates show that this approach may eventually allow us to place a cortex‐scale circuit, with about 10¹⁰ neurons and about 10¹⁴ synapses, on a silicon wafer of approximately 10 × 10 cm². Such a system may provide an average cell‐to‐cell latency of about 20 ns and thus perform information processing and system training (possibly including self‐evolution after initial training) at a speed approximately six orders of magnitude higher than that of its biological prototype, at acceptable power dissipation.
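
The six-orders-of-magnitude speed-up claimed above can be sanity-checked with a back-of-the-envelope calculation; this is only a sketch, and the ~20 ms figure assumed for biological cell-to-cell signal latency is our illustrative assumption, not a number from the abstract.

```python
import math

# Average cell-to-cell latency of the proposed CMOL CrossNet (from the abstract).
cmol_latency_s = 20e-9   # ~20 ns

# Assumed typical cell-to-cell signal latency in cortical tissue
# (an illustrative assumption on the order of tens of milliseconds).
bio_latency_s = 20e-3    # ~20 ms

speedup = bio_latency_s / cmol_latency_s
orders_of_magnitude = math.log10(speedup)

print(f"speed-up ≈ {speedup:.0e}, i.e. about {orders_of_magnitude:.0f} orders of magnitude")
```

Under this assumption the ratio is 20 ms / 20 ns = 10⁶, consistent with the stated "approximately six orders of magnitude".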