Google uses AI to design a processor that runs AI more efficiently


Google operates a vast amount of computing hardware

Engineers at Google have tasked artificial intelligence with designing processors that are faster and more efficient, and then used the resulting chip designs to develop the next generation of specialized computers running the same kind of AI algorithms.


Google operates at such a large scale that it designs its own computer chips rather than buying commercial products. This allows it to optimize the chips to run its own software, but the process takes time and is expensive: a custom chip usually takes two to three years to develop.


One stage of chip design is a process called floorplanning, which involves taking the finalized circuit diagram of a new chip and arranging its millions of components into an efficient layout for manufacturing. Although the functional design of the chip is complete at this point, the layout can have a huge effect on speed and power consumption. For chips in smartphones, the priority might be to reduce power consumption to extend battery life, but for data centers it may be more important to maximize speed.
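To make that trade-off concrete, here is a minimal sketch, not Google's tooling, of how a floorplanning objective might be scored. It assumes a toy model in which components sit on grid cells and "nets" are pairs of components that must be wired together; the component names, weights and cost terms are all illustrative.

```python
from typing import Dict, List, Tuple

Coord = Tuple[int, int]

def wirelength(placement: Dict[str, Coord], nets: List[Tuple[str, str]]) -> int:
    # Manhattan distance between connected components: a common proxy for
    # signal delay and switching power.
    return sum(abs(placement[a][0] - placement[b][0]) +
               abs(placement[a][1] - placement[b][1]) for a, b in nets)

def congestion(placement: Dict[str, Coord]) -> int:
    # Components crowded into the same cell are harder to wire up and cool.
    counts: Dict[Coord, int] = {}
    for cell in placement.values():
        counts[cell] = counts.get(cell, 0) + 1
    return sum(n - 1 for n in counts.values() if n > 1)

def layout_cost(placement: Dict[str, Coord], nets: List[Tuple[str, str]],
                speed_weight: float = 1.0, power_weight: float = 0.5) -> float:
    # A data-center chip might weight wirelength (speed) heavily; a phone chip
    # might weight congestion (power, heat) more. Both weights are made up.
    return speed_weight * wirelength(placement, nets) + power_weight * congestion(placement)

# Tiny example: three components and two connections.
placement = {"alu": (0, 0), "cache": (3, 0), "io": (3, 4)}
nets = [("alu", "cache"), ("cache", "io")]
print(layout_cost(placement, nets))   # 3 + 4 = 7.0 with the default weights
```

Changing the weights changes which layouts score best, which is the sense in which a phone chip and a data-center chip can have different "optimal" floorplans for the same circuit.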



Floorplanning has previously been a highly manual and time-consuming task, says Anna Goldie at Google. Teams would divide a larger chip into blocks and work on the parts in parallel, fiddling around to find small improvements, she says.


But Goldie and her colleagues have now created software that turns the floorplanning problem into a task for a neural network. It treats the empty chip and the millions of components as a complex jigsaw with a vast number of possible solutions. The aim is to optimize whatever parameters the engineers decide are most important, while also placing all the components and the connections between them accurately.
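To get a feel for how big that jigsaw is, the back-of-the-envelope snippet below (with made-up numbers far smaller than a real chip) counts the arrangements for even a tiny placement problem.

```python
import math

# Hypothetical toy problem: assign 50 components to distinct cells on a 10x10 grid.
cells, components = 100, 50
arrangements = math.perm(cells, components)   # 100 * 99 * ... * 51
print(f"{arrangements:.3e}")                  # roughly 3e93 possible placements
```

A real chip has millions of components, so exhaustively checking every layout is out of the question; the point of the neural network is to search this space far more selectively.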


The software starts by developing placements at random, which are tested for performance and efficiency by a separate algorithm, with the results fed back to the first. In this way it gradually learns which strategies are effective and builds on past successes. "It starts off random and makes very bad placements, but after thousands of iterations it becomes very good and fast," says Goldie.
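The loop described here can be sketched in a heavily simplified form. Google's system uses a neural network trained by reinforcement learning on real chip data; the toy below only illustrates the pattern of sampling placements, scoring them with a separate evaluator and feeding the score back. The components, grid size, scoring function and update rule are all assumptions made for the illustration.

```python
import random
from collections import defaultdict

GRID = [(x, y) for x in range(8) for y in range(8)]
COMPONENTS = ["alu", "cache", "io", "dma"]
NETS = [("alu", "cache"), ("cache", "io"), ("io", "dma")]

# "Policy": one weight per (component, cell) pair, uniform at first so early
# placements are effectively random.
weights = {c: defaultdict(lambda: 1.0) for c in COMPONENTS}

def sample_placement():
    # Sample a cell for every component according to the current weights
    # (overlaps are allowed in this toy model).
    return {c: random.choices(GRID, weights=[weights[c][cell] for cell in GRID], k=1)[0]
            for c in COMPONENTS}

def score(placement):
    # Separate evaluator: shorter total wiring between connected components scores higher.
    return -sum(abs(placement[a][0] - placement[b][0]) +
                abs(placement[a][1] - placement[b][1]) for a, b in NETS)

best = None
for step in range(500):
    batch = [(score(p), p) for p in (sample_placement() for _ in range(20))]
    top_score, top_placement = max(batch, key=lambda sp: sp[0])
    if best is None or top_score > best[0]:
        best = (top_score, top_placement)
    # Feedback: make the cells used in this batch's best placement more likely
    # to be sampled again -- a crude stand-in for the learning step.
    for c, cell in top_placement.items():
        weights[c][cell] *= 1.1

print("best score:", best[0])
print("best placement:", best[1])
```

Early samples are poor, but because good placements bias future sampling, the scores improve over the iterations, which is the same feedback principle the article describes, minus the neural network.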

The team's software produces chip layouts in under six hours that are comparable or superior to those produced by humans over several months, in terms of power consumption, performance and chip density. Existing software tools that churn out designs at similar speeds fell short of both the humans and the AI on every metric measured in the test.


The chip design used in the experiment was the latest version of Google's Tensor Processing Unit (TPU), which is designed to run exactly the kind of neural network algorithms used in the company's search engine and automatic translation tools. It is conceivable that new AI-designed chips will in future be used to design their own replacements, and that those successors will in turn be used to design their own successors.


The team believes the same neural network approach can be applied to other stages of chip design, cutting the overall design time from years to days. The company has an incentive to move quickly, because even small improvements in speed or power consumption can make a big difference at the scale it operates.


"There are high opportunity costs in not releasing the next generation. Say new is more powerful. The impact level that can be owned on the carbon footprint of machine learning, considering it is deployed in all kinds of different data centers, truly valuable. Even one day before, It makes a big difference, "Goldie said.
