New Delhi: A glove that can feel vegetables
and tell you which ones are the best to pick,
a robot that can detect dangerous pollutants,
glasses that can help the sightless “see”—
these are just some of the applications that
could come to the market from work being
done by scientists and companies on
neuromorphic chips.
Neuromorphic engineering entails looking at
biology for inspiration to grasp the
mechanisms that go into learning, memory
and computation, and transferring this
understanding to new hardware devices and
circuits.
The hope is that these devices will be able to
adapt and respond to physical environments
the way biological organisms do. Around the
world, scientists are trying to build
computing chips inspired by the biology of
living things, from worms to cats; the
ultimate quest is to make a chip modelled on
the human brain.
“If you see a supercomputer, it can perform
huge computations, within a second, that the
brain can’t do,” said Tapan Nayak, research
staff member at International Business
Machines Corp.’s (IBM’s) India Research Lab in New Delhi. “But there are many problems, from the perception or cognition point of view, where the human brain is far superior to the best supercomputers.”
For the mind’s eye to capture an image, the brain receives data from multiple senses, integrates these signals simultaneously and processes them in real time, Nayak said.
“What we did in the last 40 years is that we
tried to improve the efficiency of the
processing chip, and we improved the speed.
We have multiple processors integrated to get
speed, but ultimately there is no change in
architecture,” he said. “That is when it was decided that we have to change the architecture; the architecture is why the human brain is able to perform so efficiently and the supercomputer isn’t.”
In 2009, IBM published a paper titled The Cat is Out of the Bag, in which its researchers detailed their success in
simulating a cat’s brain. They had already
simulated a mouse’s brain before that. Under
IBM’s SyNAPSE (Systems of Neuromorphic
Adaptive Plastic Scalable Electronics) project,
the next big step is to make artificial human
brains, with support from the US Defense
Advanced Research Projects Agency (DARPA).
One of the aims of DARPA is to help amputee
soldiers from Iraq and Afghanistan by
developing prosthetic limbs that are
interfaced with the brain, enabling the
amputees to feel they have a real limb rather
than an artificial one.
IBM is making a neuromorphic chip whose architecture is completely different from that of a conventional processor such as the Pentium. A major motivation is that the human brain processes information using mere watts of power, compared with the megawatts a supercomputer consumes to process the same information.
The first goal of SyNAPSE, back in 2006, was to simulate the human brain in software. After two years, using supercomputers, the researchers managed to simulate only 5% of it. Given the resources and power this consumed, the idea was dropped.
“So next the idea came: why don’t we go for hardware, completely change the architecture and put the intelligence in a chip? That is why, for the last three years, the goal has been to develop hardware neurons,” said Nayak.
The hardware chip they developed accommodates millions of programmable neurons. Nayak explains that each neuron operates on a very simple algorithm, which the hardware neuron emulates.
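The article does not spell out that algorithm. One common simple model in neuromorphic work is the leaky integrate-and-fire neuron; the Python sketch below illustrates the idea, with all parameter values and names chosen for illustration rather than taken from IBM’s design.

```python
# Illustrative sketch: a leaky integrate-and-fire (LIF) neuron, one simple
# model that hardware neurons often emulate. Parameters and names here are
# assumptions for illustration, not IBM's actual design.

def lif_neuron(input_currents, threshold=1.0, leak=0.9, reset=0.0):
    """Return a list of 0/1 spikes for a sequence of input currents."""
    potential = 0.0
    spikes = []
    for current in input_currents:
        potential = leak * potential + current  # integrate input, with leak
        if potential >= threshold:              # fire once the threshold is crossed
            spikes.append(1)
            potential = reset                   # reset the membrane potential
        else:
            spikes.append(0)
    return spikes

# A steady input drives the neuron to spike periodically.
print(lif_neuron([0.3] * 10))  # -> [0, 0, 0, 1, 0, 0, 0, 1, 0, 0]
```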
IBM’s long-term goal is to build a
neurosynaptic chip system with 10 billion
neurons and 100 trillion synapses, all while
consuming only one kilowatt of power and
occupying less than two litres of volume. The
next project is to develop a language for the
chip so that ordinary people can use it.
“We are just providing the basic
infrastructure. The hardware team is putting
more and more neurons on the 2x2-inch
chip,” said Nayak.
In a nutshell, it is a piece of artificial brain that can serve as a sensing, visual or auditory device, and it needs no external guiding system because it is self-contained.
Efforts in India
Meanwhile, efforts are being made towards
this end in Indian institutes as well. At the
Indian Institute of Technology (IIT), Bombay,
students under the leadership of Bipin
Rajendran are developing a robot that can
detect traces of dangerous chemicals in an
unknown environment.
“I am working along with my students to
understand how computational tasks are
performed by biological systems. We are also
building nanoscale devices to mimic the
characteristics of neurons and synapses in the
brain,” said Rajendran, an assistant professor
in the department of electrical engineering at
IIT-Bombay.
“The ultimate hope for building
neuromorphic computers is that such systems
will be able to assist us in many complex
decision-making scenarios. For instance, IBM
has talked about developing cognitive
computing systems to help doctors diagnose
diseases. I believe such systems will also find
applications in autonomous navigation
systems, finance applications and other high
performance computing tasks,” he added.
The electrical engineering department at IIT-
Delhi developed a router chip inspired by the
biology of an ant. “If you leave a piece of
sugar, ants find the shortest path from one
point to another. Initially they meander all
over the place, but after a point they are
focused on one shortest path. They do this
without communicating with each other
directly,” said Jayadeva, professor in the electrical engineering department at IIT-Delhi, who uses only one name.
“The ants deposit pheromones on the path
that they are travelling on, and after a point
the shortest path has the highest pheromones
and ants start travelling on that path,” he
added.
Jayadeva and a team of students developed a chip that could work as a router for telecommunications networks, based on an ant colony optimization (ACO) algorithm built on a mathematical model they had developed. The ACO algorithm guarantees finding the shortest path under arbitrary conditions.
“To the best of our knowledge, it’s the first
ant colony optimization-based design to go
on any chip in the world,” he said. “But with
ant colony optimization, you can decentralize it, so that if a link in some path in the telecom network fails or suddenly becomes slow, the system finds routes without a central processor having to figure out what’s the best one.”
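Jayadeva’s mathematical model is not described in the article; the Python sketch below shows a generic ant colony optimization loop finding a short path in a small graph, purely to illustrate the pheromone mechanism he describes. The graph, parameters and function names are assumptions, not the IIT-Delhi chip’s design.

```python
import random

# Illustrative sketch: a generic ant colony optimization (ACO) search for a
# short path in a small graph. Graph, parameters and names are assumptions
# for illustration, not the IIT-Delhi router chip's actual design.

# Undirected graph as an adjacency dict: node -> {neighbour: link cost}.
GRAPH = {
    "A": {"B": 1, "C": 4},
    "B": {"A": 1, "C": 1, "D": 5},
    "C": {"A": 4, "B": 1, "D": 1},
    "D": {"B": 5, "C": 1},
}

def aco_shortest_path(graph, start, goal, ants=20, iterations=50,
                      evaporation=0.5, deposit=1.0):
    # Start with a little pheromone on every link.
    pheromone = {u: {v: 1.0 for v in nbrs} for u, nbrs in graph.items()}
    best_path, best_cost = None, float("inf")

    for _ in range(iterations):
        successful = []
        for _ in range(ants):
            node, path, visited = start, [start], {start}
            # Each ant walks probabilistically, preferring links with strong
            # pheromone and low cost, until it reaches the goal or gets stuck.
            while node != goal:
                choices = [v for v in graph[node] if v not in visited]
                if not choices:
                    break
                weights = [pheromone[node][v] / graph[node][v] for v in choices]
                node = random.choices(choices, weights=weights)[0]
                path.append(node)
                visited.add(node)
            if node == goal:
                cost = sum(graph[a][b] for a, b in zip(path, path[1:]))
                successful.append((path, cost))
                if cost < best_cost:
                    best_path, best_cost = path, cost

        # Pheromone evaporates everywhere; each successful ant then deposits
        # pheromone inversely proportional to its path cost, so shorter paths
        # accumulate more pheromone over time, just as with real ants.
        for u in pheromone:
            for v in pheromone[u]:
                pheromone[u][v] *= (1.0 - evaporation)
        for path, cost in successful:
            for a, b in zip(path, path[1:]):
                pheromone[a][b] += deposit / cost
                pheromone[b][a] += deposit / cost

    return best_path, best_cost

print(aco_shortest_path(GRAPH, "A", "D"))  # typically (['A', 'B', 'C', 'D'], 3)
```

Because each ant needs only local link costs and pheromone levels, the same update rule can run at every router without a central controller, which is the decentralization Jayadeva describes.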
The idea of neuromorphic engineering has
gained much attention, especially since the
start of two brain research projects in the
European Union and the US. One of the aims
of both projects is to gain enough knowledge
of the human brain to help engineers
simulate it more accurately. But
neuromorphic computing by itself is not a
new concept, having been introduced by Carver Mead in the late 1980s.
“Earlier, it was about trying to copy the principles of nature blindly, but that is difficult since the brain is three-dimensional and all our designs are two-dimensional; we don’t have that kind of integration, and we are also largely in the dark about how exactly the brain processes information,” said Jayadeva.
“We began by trying to understand learning
with models of biological neural networks,
but we have far more sophisticated systems
today based on a mathematical understanding
of learning. The neuromorphic approach has
found many good applications such as the
silicon retina and the electronic cochlea, but
the vast majority of applications seem to be
where we are able to extract basic principles,
build a simpler and more tractable practical
model that can be analysed and used more
readily, and which is amenable to the vast
array of tools in engineering and
mathematics,” he added.