Scientists Figured Out How to Make Neural Networks 90 Percent Smaller

A pair of MIT researchers has discovered a way to build artificial neural networks that are only one-tenth the usual size without losing any computational ability. The development could allow other researchers to create AI that is smaller, faster, and just as smart as it is today.


When people talk about artificial intelligence, they often mean a class of computer programs called artificial neural networks. These programs are loosely modeled on how our own brains work, which makes them remarkably capable. They can identify the content of images, defeat humans in abstract strategy games, and even drive cars.

At their core, these programs are made up of sets of 'neurons,' much like our own brains. Each neuron is connected to many other neurons. An individual neuron can perform only a handful of basic calculations, but when many of them are wired together, the network can learn remarkably complex tasks.
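To make that concrete, here is a minimal sketch (not MIT's code, just an illustration) of the basic calculation a single artificial neuron performs, and how a few of them wired together form a tiny network:

```python
# Each "neuron" computes a weighted sum of its inputs, then applies a
# simple activation function -- here ReLU, which zeroes out negative sums.
def neuron(inputs, weights, bias):
    total = bias + sum(x * w for x, w in zip(inputs, weights))
    return max(0.0, total)

# Two neurons feeding a third form a tiny two-layer network.
# The weights here are arbitrary example values, not trained ones.
def tiny_network(x):
    h1 = neuron(x, [0.5, -0.2], 0.1)
    h2 = neuron(x, [0.3, 0.8], -0.1)
    return neuron([h1, h2], [1.0, 1.0], 0.0)
```

Each neuron on its own does almost nothing; the network's power comes entirely from how the connections (the weights) are set.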

The most important thing for a solid neural network is the connections between neurons. Good connections make for a good network, but bad connections leave you with nothing but junk. The process of forming these connections is called training, and it's what our own brains do when we learn something new.


The one big difference? Our brains regularly carve out old connections that no longer work, in a process called 'pruning.' We prune weak or broken connections all the time, but most artificial neural networks are pruned only once, at the very end of training.

So the MIT researchers decided to try something new: pruning the network repeatedly throughout training. They found that neural networks developed this way were as good as networks trained by the standard procedure, yet the pruned networks were about 90 percent smaller and much more efficient. They also needed less training time, and they were more accurate.
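The core idea of pruning can be sketched in a few lines. This is a rough illustration of magnitude pruning (one common approach, not necessarily the researchers' exact method), which assumes the connections with weights closest to zero matter least and removes them:

```python
def prune(weights, fraction):
    """Zero out the given fraction of weights with the smallest magnitude."""
    n_prune = int(len(weights) * fraction)
    # Indices of the weakest connections, smallest absolute value first.
    weakest = set(sorted(range(len(weights)), key=lambda i: abs(weights[i]))[:n_prune])
    # Removing a connection just means setting its weight to zero.
    return [0.0 if i in weakest else w for i, w in enumerate(weights)]

# Pruning 40% of these five example weights removes the two nearest zero.
pruned = prune([0.9, -0.05, 0.4, 0.01, -0.7], fraction=0.4)
```

Applying a step like this repeatedly during training, rather than once at the end, is what let the pruned networks stay small without losing accuracy.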

In the near future, researchers could use this pruning method to design even better neural networks. Because such networks are both powerful and lightweight, they could run on small electronic devices, and over time we may find neural networks operating almost everywhere.

