https://techxplore.com/news/2022-01-harnessing-noise-optical-ai.html


JANUARY 21, 2022

Harnessing noise in optical computing for AI

by University of Washington

Artificial Intelligence. Credit: CC0 Public Domain

Artificial intelligence and machine learning are currently affecting our lives in many small but impactful ways. For example, AI and machine learning applications recommend entertainment we might enjoy through streaming services such as Netflix and Spotify.

In the near future, it’s predicted that these technologies will have an even larger impact on society through activities such as driving fully autonomous vehicles, enabling complex scientific research and facilitating medical discoveries.

But the computers used for AI and machine learning demand a lot of energy. Currently, the need for computing power related to these technologies is doubling roughly every three to four months. And cloud computing data centers used by AI and machine learning applications worldwide are already devouring more electrical power per year than some small countries. It’s easy to see that this level of energy consumption is unsustainable.

A research team led by the University of Washington has developed new optical computing hardware for AI and machine learning that is faster and much more energy efficient than conventional electronics. The research also addresses another challenge—the “noise” inherent to optical computing that can interfere with computing precision.

In a new paper, published Jan. 21 in Science Advances, the team demonstrates an optical computing system for AI and machine learning that not only mitigates this noise but actually uses some of it as input to help enhance the creative output of the artificial neural network within the system.

“We’ve built an optical computer that is faster than a conventional digital computer,” said lead author Changming Wu, a UW doctoral student in electrical and computer engineering. “And also, this optical computer can create new things based on random inputs generated from the optical noise that most researchers tried to evade.”

Optical computing noise essentially comes from stray light particles, or photons, that originate from the operation of lasers within the device and background thermal radiation. To target noise, the researchers connected their optical computing core to a special type of machine learning network, called a Generative Adversarial Network (GAN).

The team tested several noise mitigation techniques, which included using some of the noise generated by the optical computing core to serve as random inputs for the GAN.

For example, the team assigned the GAN the task of learning how to handwrite the number “7” like a person would. The optical computer could not simply print out the number according to a prescribed font. It had to learn the task much like a child would, by looking at visual samples of handwriting and practicing until it could write the number correctly. Of course, the optical computer didn’t have a human hand for writing, so its form of “handwriting” was to generate digital images that had a style similar to the samples it had studied, but were not identical to them.

“Instead of training the network to read handwritten numbers, we trained the network to learn to write numbers, mimicking visual samples of handwriting that it was trained on,” said senior author Mo Li, a UW professor of electrical and computer engineering. “We, with the help of our computer science collaborators at Duke University, also showed that the GAN can mitigate the negative impact of the optical computing hardware noises by using a training algorithm that is robust to errors and noises. More than that, the network actually uses the noises as random input that is needed to generate output instances.”
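The idea described above—feeding the hardware's own noise into the generator as its latent input, and training through the noisy forward pass so the learned weights tolerate it—can be illustrated with a toy simulation. This is a minimal sketch, not the team's actual photonic system or training code: the function names (`noisy_optical_matmul`, `harvest_noise`), the noise model (additive Gaussian), and all dimensions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_optical_matmul(W, x, noise_std=0.05):
    """Simulate an optical matrix-vector multiply whose analog output
    carries additive Gaussian noise (stray photons, thermal background).
    The noise model is an illustrative assumption."""
    return W @ x + rng.normal(0.0, noise_std, size=W.shape[0])

def harvest_noise(dim, scale=1.0):
    """Sample hardware-like noise and reuse it as the GAN generator's
    latent input, in place of a software pseudo-random vector."""
    return rng.normal(0.0, scale, size=dim)

def generator(z, W1, W2):
    """Tiny two-layer generator: latent noise z -> fake 'image' in (0, 1).
    Each layer goes through the (simulated) noisy optical multiply, so
    training with this forward pass is implicitly noise-robust."""
    h = np.tanh(noisy_optical_matmul(W1, z))
    return 1.0 / (1.0 + np.exp(-noisy_optical_matmul(W2, h)))  # sigmoid

# Illustrative sizes: an 8-dim latent code producing an 8x8 "digit".
latent_dim, hidden_dim, out_dim = 8, 16, 64
W1 = rng.normal(0, 0.5, (hidden_dim, latent_dim))
W2 = rng.normal(0, 0.5, (out_dim, hidden_dim))

z = harvest_noise(latent_dim)   # hardware noise as the latent code
fake = generator(z, W1, W2)     # noisy forward pass, as used in training
```

Because the generator already computes through the noisy hardware path, a GAN trained this way learns weights that work despite the noise, while each fresh draw of hardware noise yields a different generated sample—exactly the dual role of noise the authors describe.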

After learning from handwritten samples of the number seven, which were from a standard AI-training image set, the GAN practiced writing “7” until it could do it successfully. Along the way, it developed its own distinct writing style and could write numbers from one to 10 in computer simulations.

The next steps include building this device at a larger scale using current semiconductor manufacturing technology. So, instead of constructing the next version of the device in a lab, the team plans to use an industrial semiconductor foundry to achieve wafer-scale technology. A larger-scale device will further improve performance and allow the research team to do more complex tasks beyond handwriting generation such as creating artwork and even videos.

“This optical system represents a computer hardware architecture that can enhance the creativity of artificial neural networks used in AI and machine learning, but more importantly, it demonstrates the viability for this system at a large scale where noise and errors can be mitigated and even harnessed,” Li said. “AI applications are growing so fast that in the future, their energy consumption will be unsustainable. This technology has the potential to help reduce that energy consumption, making AI and machine learning environmentally sustainable—and very fast, achieving higher performance overall.”




More information: Changming Wu et al, Harnessing optoelectronic noises in a photonic generative network, Science Advances (2022). DOI: 10.1126/sciadv.abm2956

Journal information: Science Advances

Provided by University of Washington
