Researchers Take a Step Towards Light-Based, Brain-Like Computing Chip



A technology that functions like a brain? In these times of artificial intelligence, this no longer seems so far-fetched—for example, when a mobile phone can recognise faces or languages. With more complex applications, however, computers still quickly come up against their own limitations.

Image Caption: Schematic illustration of a light-based, brain-inspired chip. By mimicking biological neuronal systems, photonic neuromorphic processors provide a promising platform to tackle challenges in machine learning and pattern recognition.

One of the reasons for this is that a computer traditionally has separate memory and processor units—the consequence of which is that all data have to be sent back and forth between the two.

In this respect, the human brain is way ahead of even the most modern computers because it processes and stores information in the same place—in the synapses, or connections between neurons, of which there are around a million billion in the brain. An international team of researchers from the Universities of Münster (Germany), Oxford and Exeter (both UK) has now succeeded in developing a piece of hardware which could pave the way for creating computers which resemble the human brain. The scientists managed to produce a chip containing a network of artificial neurons that works with light and can imitate the behaviour of neurons and their synapses.

The researchers were able to demonstrate that such an optical neurosynaptic network is able to “learn” information and use it as a basis for computing and recognising patterns—just as a brain can. As the system functions solely with light and not with traditional electrons, it can process data many times faster. “This integrated photonic system is an experimental milestone,” says Prof. Wolfram Pernice from Münster University and lead partner in the study. “The approach could be used later in many different fields for evaluating patterns in large quantities of data, for example in medical diagnoses.”

The Story in Detail—Background and Method Used

Most of the existing approaches relating to so-called neuromorphic networks are based on electronics, whereas optical systems—in which photons, i.e. light particles, are used—are still in their infancy. The principle which the German and British scientists have now presented works as follows: optical waveguides that can transmit light and can be fabricated into optical microchips are integrated with so-called phase-change materials—which are already found today on storage media such as re-writable DVDs. These phase-change materials are characterised by the fact that they change their optical properties dramatically, depending on whether they are crystalline—when their atoms arrange themselves in a regular fashion—or amorphous—when their atoms organise themselves in an irregular fashion. This phase change can be triggered by light when a laser heats the material up. “Because the material reacts so strongly, and changes its properties dramatically, it is highly suitable for imitating synapses and the transfer of impulses between two neurons,” says lead author Johannes Feldmann, who carried out many of the experiments as part of his PhD thesis at Münster University.
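The idea can be pictured with a simple numerical analogy. The following Python snippet is a minimal, purely illustrative sketch, not the authors' device model: the synaptic “weight” is represented by the optical transmission of a phase-change cell, and an optical write pulse shifts the material between its crystalline and amorphous states. The class name, parameters and numerical values are all hypothetical.

```python
# Toy model of a phase-change-material (PCM) synapse: the synaptic "weight"
# is the optical transmission of a PCM cell, which depends on how crystalline
# or amorphous the material is. All values are hypothetical illustrations.

class PCMSynapse:
    def __init__(self, crystalline_fraction=1.0):
        # 1.0 = fully crystalline, 0.0 = fully amorphous
        self.crystalline_fraction = crystalline_fraction

    def transmission(self):
        # Crystalline PCM absorbs more light than amorphous PCM, so a more
        # amorphous cell transmits more light (a larger synaptic weight).
        t_amorphous, t_crystalline = 0.9, 0.1  # hypothetical values
        return (t_amorphous * (1 - self.crystalline_fraction)
                + t_crystalline * self.crystalline_fraction)

    def write(self, pulse_energy):
        # A strong optical pulse heats the PCM and partially amorphizes it,
        # lowering the crystalline fraction and raising the transmission.
        self.crystalline_fraction = max(0.0, self.crystalline_fraction - pulse_energy)

    def propagate(self, input_power):
        # The weighted signal is the input light attenuated by the cell.
        return input_power * self.transmission()


syn = PCMSynapse()
print(syn.propagate(1.0))   # low transmission while fully crystalline
syn.write(0.5)              # optical write pulse changes the phase state
print(syn.propagate(1.0))   # higher transmission after partial amorphization
```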

Image Caption: The optical microchips that the researchers are working on developing are about the size of a one-cent piece.

In their study, the scientists succeeded for the first time in merging many nanostructured phase-change materials into one neurosynaptic network. The researchers developed a chip with four artificial neurons and a total of 60 synapses. The structure of the chip – consisting of different layers – was based on so-called wavelength division multiplexing, a technique in which light is transmitted on several separate channels within the optical nanocircuit.
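Conceptually, each neuron on such a chip sums the weighted light pulses arriving on its input wavelength channels and emits an output pulse when the combined power is high enough. The Python sketch below illustrates that idea only; the function, the threshold and the weight values are assumptions for illustration, not parameters taken from the demonstrated chip.

```python
# Illustrative "photonic neuron": inputs arrive on different wavelength
# channels (WDM), each is weighted by a PCM synapse (modelled as a simple
# transmission factor), the weighted powers add up, and the neuron "fires"
# if the total exceeds a threshold. All numbers are hypothetical.

def photonic_neuron(input_powers, synapse_transmissions, threshold=0.5):
    # WDM lets the channels share one waveguide without interfering,
    # so the weighted contributions simply sum.
    total_power = sum(p * t for p, t in zip(input_powers, synapse_transmissions))
    # If the combined power is high enough, the neuron's own PCM cell
    # switches and an output pulse is emitted (modelled here as returning 1).
    return 1 if total_power >= threshold else 0

# Example with four input channels (illustrative sizes only; the demonstrated
# chip combined four neurons with 60 synapses in total).
inputs = [1.0, 0.0, 1.0, 1.0]            # light pulses on four channels
weights = [0.2, 0.8, 0.1, 0.3]           # PCM synapse transmissions
print(photonic_neuron(inputs, weights))  # -> 1 (the neuron fires)
```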

In order to test the extent to which the system is able to recognise patterns, the researchers “fed” it with information in the form of light pulses, using two different machine-learning algorithms. In this process, an artificial system “learns” from examples and can ultimately generalise from them. With both of the algorithms used – one for so-called supervised learning and one for unsupervised learning – the artificial network was ultimately able, on the basis of the light patterns it was given, to recognise a target pattern, in one case a sequence of four consecutive letters.
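For the supervised case, the training idea can be pictured with a generic perceptron-style update: present a light-pulse pattern, compare the neuron's output with the desired label, and nudge the synaptic transmissions accordingly (in hardware, by partially crystallising or amorphising the PCM cells). The sketch below is a hedged illustration of that idea in Python, not the specific training procedure used in the study; the patterns, learning rate and threshold are made-up example values.

```python
# Perceptron-style supervised update on a single "photonic neuron" (toy model).
# Weights stand in for PCM transmissions, so they are clipped to [0, 1].

def train(patterns, labels, n_inputs, lr=0.1, epochs=20, threshold=0.5):
    weights = [0.5] * n_inputs  # initial synaptic transmissions
    for _ in range(epochs):
        for x, target in zip(patterns, labels):
            # Forward pass: weighted sum of input pulses vs. firing threshold.
            output = 1 if sum(p * w for p, w in zip(x, weights)) >= threshold else 0
            error = target - output
            # Nudge each weight in proportion to its input, keeping it in [0, 1]
            # because an optical transmission cannot leave that range.
            weights = [min(1.0, max(0.0, w + lr * error * p))
                       for w, p in zip(weights, x)]
    return weights

# Example: learn to fire only for the pulse pattern [1, 0, 1, 1].
patterns = [[1, 0, 1, 1], [0, 1, 0, 0], [1, 1, 0, 0], [0, 0, 1, 1]]
labels = [1, 0, 0, 0]
print(train(patterns, labels, n_inputs=4))
```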

“Our system has enabled us to take an important step towards creating computer hardware which behaves similarly to neurons and synapses in the brain and which is also able to work on real-world tasks,” says Wolfram Pernice. “By working with photons instead of electrons we can exploit to the full the known potential of optical technologies – not only in order to transfer data, as has been the case so far, but also in order to process and store them in one place,” adds co-author Prof. Harish Bhaskaran from the University of Oxford.

One very specific example is that, with the aid of such hardware, cancer cells could be identified automatically. Further work will need to be done, however, before such applications become reality. The researchers need to increase the number of artificial neurons and synapses and increase the depth of the neural networks. This can be done, for example, with optical chips manufactured using silicon technology. “This step is to be taken in the EU joint project 'Fun-COMP' by using foundry processing for the production of nanochips,” says co-author and leader of the Fun-COMP project, Prof. C. David Wright from the University of Exeter.





