Artificial intelligence can now be computed faster with less energy using new hardware

By TechThop Team

Posted on: 29 Jul, 2022

The amount of time, effort, and money needed to train ever more complex neural network models is soaring as researchers push the limits of machine learning. Analog deep learning, a new branch of artificial intelligence, promises faster computation with lower energy consumption.

The researchers published their findings in the journal Science. Analog deep learning relies on programmable resistors, just as digital deep learning relies on transistors. Repeating arrays of programmable resistors in complex layers creates a network of analog artificial 'neurons' and 'synapses'.

Such a network can be trained to perform complex AI tasks such as image recognition and natural language processing. MIT researchers set out to push the speed limits of an analog synapse they had previously developed. Their new devices run about 1 million times faster than earlier versions, which also makes them about 1 million times faster than the synapses in the human brain.
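To make the resistor-array idea concrete, here is a minimal NumPy sketch of an idealized crossbar, in which a layer's weights are stored as conductances and Ohm's and Kirchhoff's laws compute a matrix-vector product in a single analog step. This is a toy model with made-up values, not the researchers' hardware.

    import numpy as np

    # Toy model of an analog crossbar: each weight of a neural-network
    # layer is stored as the conductance (in siemens) of one programmable
    # resistor at a row/column crossing.
    rng = np.random.default_rng(0)
    conductances = rng.uniform(1e-6, 1e-5, size=(3, 4))  # 4 inputs -> 3 outputs

    # Input activations are encoded as voltages applied to the rows.
    voltages = np.array([0.10, 0.20, 0.05, 0.15])  # volts

    # Ohm's law (I = G * V) plus Kirchhoff's current law (currents sum
    # along each column) give the full matrix-vector product at once.
    output_currents = conductances @ voltages  # amperes, one per output
    print(output_currents)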

Furthermore, the inorganic material used in the device makes the resistor extremely energy-efficient, and unlike the materials used in earlier versions, it is compatible with silicon fabrication techniques. That compatibility makes nanometer-scale devices possible, and deep-learning applications could be integrated into commercial computing hardware.

'By combining that key insight with the powerful nanofabrication techniques at MIT.nano, we have demonstrated that these devices are intrinsically very fast and operate at reasonable voltages,' explained senior author Jesus del Alamo, the Donner Professor in MIT's Department of Electrical Engineering and Computer Science (EECS). 'These devices now appear promising for future applications.'

'The device works by electrochemically injecting the smallest ion, the proton, into an insulating oxide to modulate its electronic conductivity. By using a strong electric field, we could accelerate the motion of this ion and push these ionic devices to a nanosecond operation regime,' said senior author Bilge Yildiz, the Breene M. Kerr Professor of Nuclear Science and Engineering.
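As a rough cartoon of that mechanism, the resistor's conductance can be pictured as growing with the number of protons a voltage pulse has pushed into the oxide, with stronger fields moving protons faster. The class and all constants below are invented for illustration; they are not device physics from the paper.

    # Cartoon of a protonic programmable resistor: conductance rises with
    # the number of protons injected into the oxide channel.
    class ProtonicResistor:
        def __init__(self, g_min=1e-6, g_per_proton=1e-12):
            self.g_min = g_min                # conductance before injection (S)
            self.g_per_proton = g_per_proton  # conductance gained per proton (S)
            self.protons = 0

        def pulse(self, volts, seconds):
            # Linear toy approximation: a stronger field (higher voltage)
            # injects proportionally more protons in the same time.
            self.protons += int(volts * seconds * 1e15)

        @property
        def conductance(self):
            return self.g_min + self.g_per_proton * self.protons

    r = ProtonicResistor()
    r.pulse(volts=10, seconds=5e-9)  # one nanosecond-regime write pulse
    print(r.conductance)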

'Action potentials in biological cells rise and fall on a timescale of milliseconds, because the voltage difference of about 0.1 volts is constrained by the stability of water,' said Ju Li, a senior author of the study. 'Here we apply up to 10 volts across a solid glass film of nanoscale thickness that conducts protons, without permanently damaging it. And the stronger the field, the faster the ionic devices.'
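A back-of-the-envelope calculation shows the scale of the fields involved. The article gives neither thickness, so the 10-nanometer figures below are assumptions chosen purely for illustration.

    # Rough field-strength comparison; both ~10 nm thicknesses are assumed.
    bio_field = 0.1 / 10e-9      # ~0.1 V across a cell membrane -> 1e7 V/m
    device_field = 10.0 / 10e-9  # ~10 V across the glass film   -> 1e9 V/m
    print(f"device field is {device_field / bio_field:.0f}x stronger")  # ~100x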

Using these programmable resistors, neural networks can be trained far more quickly and at a lower cost in money and energy. Researchers could thus develop deep-learning models much faster and apply them to tasks such as self-driving cars, fraud detection, and medical image analysis.
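As a hedged sketch of why training speeds up: in generic analog in-memory training schemes (not necessarily the MIT team's exact scheme), a layer's gradient-descent update becomes a rank-one outer product applied directly to the conductance array, so the long multiply-accumulate loops of a digital update collapse into parallel write pulses. The toy loop below simulates that for a single linear layer.

    import numpy as np

    rng = np.random.default_rng(1)
    G = rng.uniform(1e-6, 1e-5, size=(3, 4))  # conductances = layer weights
    x = rng.uniform(0.0, 0.2, size=4)         # input voltages
    target = np.array([1e-5, 2e-5, 1.5e-5])   # desired output currents

    lr = 0.5 / (x @ x)  # step size normalized for this toy example
    for _ in range(50):
        y = G @ x       # analog forward pass: a single step in hardware
        err = y - target
        # The squared-error gradient wrt G is the outer product of err and x.
        # In hardware this maps to parallel conductance-update pulses,
        # one per resistor, instead of a long digital multiply-add loop.
        G -= lr * np.outer(err, x)

    print(abs(G @ x - target).max())  # residual error after training

Real devices add constraints the toy ignores: conductances must stay inside a physical window, and signed weights are typically encoded as the difference between a pair of resistors.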

'With an analog processor, you won't just train the networks everyone else trains; you'll train networks of unprecedented complexity that no one else can afford. This is not a faster car, it's a spaceship,' added lead author and MIT postdoc Murat Onen.

Source: aninews
