Optical Data Corporation / Windows on Science
In many cases, such as optical coatings or simple lenses, windows, mirrors, and prisms, our company has delivered finished products in days rather than weeks. Our optical professionals provide contract manufacturing services, from the simplest lens barrel to a fully integrated optical sub-assembly. Lead time depends on the complexity of the product, the availability of raw material, and, in some cases, tooling.

For more than four decades, we have been proud to serve customers across a wide array of scientific and industrial applications.

OPCO provided the optics for the Solar Occultation Diffusor, the optic that looks directly at the sun and couples this light into a fiber-optic input to the spectrometer. A single piece of Corning glass can be diced into many small pieces with reduced variation. Our large panel sizes with low total thickness variation (TTV) and high transmission enable accurate sensor detection.

Corning regularly produces glass compositions that match the most common coefficients of thermal expansion (CTEs) for these applications. Our glass has superior surface quality and CTEs designed to match the materials most commonly used in energy sensors. Excellent thickness tolerances and low total thickness variation within parts ensure consistency and reliability.

Using pulsed light in this way allows you to perform many such operations in rapid-fire sequence. The most energy-intensive part of all this is reading the voltage on that capacitor, which requires an analog-to-digital converter. But you don't have to do that after each pulse—you can wait until the end of a sequence of, say, N pulses.

That means the device can perform N multiply-and-accumulate operations while spending the same amount of energy to read out the answer, whether N is small or large.
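The amortization argument can be made concrete with a toy energy model. The two energy constants below are illustrative assumptions, not figures from this article; the point is only the shape of the curve: as N grows, the fixed ADC cost is spread over more operations.

```python
# Hypothetical energy costs (illustrative round numbers, not measurements).
E_PULSE_FJ = 1.0    # energy to fire one optical pulse (femtojoules)
E_ADC_FJ = 1000.0   # energy for one analog-to-digital read of the capacitor

def energy_per_mac(n_pulses):
    """Total energy divided by the number of multiply-accumulates.

    The ADC read happens once per sequence of n_pulses, so its cost
    is amortized: as n_pulses grows, energy/MAC approaches E_PULSE_FJ.
    """
    return (n_pulses * E_PULSE_FJ + E_ADC_FJ) / n_pulses

for n in (1, 10, 100, 1000):
    print(n, energy_per_mac(n))
```

With these assumed numbers, one isolated operation costs 1001 fJ, while each of 1,000 accumulated operations costs only 2 fJ.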

Here, N corresponds to the number of neurons per layer in your neural network, which can easily number in the thousands. So this strategy uses very little energy. Sometimes you can save energy on the input side of things, too. That's because the same value is often used as an input to multiple neurons. Rather than that number being converted into light multiple times—consuming energy each time—it can be transformed just once, and the light beam that is created can be split into many channels.

In this way, the energy cost of input conversion is amortized over many operations. Splitting one beam into many channels requires nothing more complicated than a lens, but lenses can be tricky to put onto a chip. So the device we are developing to perform neural-network calculations optically may well end up being a hybrid that combines highly integrated photonic chips with separate optical elements. I've outlined here the strategy my colleagues and I have been pursuing, but there are other ways to skin an optical cat.
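The input-side saving fits the same accounting. Here is a slightly fuller sketch with a hypothetical modulator cost added for input conversion: splitting one beam across `fan_out` neurons divides that cost, just as accumulation divides the readout cost. All constants are illustrative assumptions.

```python
# Hypothetical per-operation energies (illustrative assumptions).
E_MOD = 100.0   # convert one input value into light (modulator)
E_MAC = 1.0     # one optical multiply-accumulate
E_ADC = 1000.0  # one analog-to-digital read of an accumulated result

def energy_per_mac(fan_out, accum_len):
    """fan_out: neurons sharing one converted input (beam splitting);
    accum_len: MACs accumulated before a single ADC read."""
    return E_MOD / fan_out + E_MAC + E_ADC / accum_len

# Both conversion and readout costs shrink as the network widens.
print(energy_per_mac(fan_out=1, accum_len=1))        # no amortization
print(energy_per_mac(fan_out=1000, accum_len=1000))  # wide-layer case
```

In the wide-layer case the per-operation energy collapses toward the bare MAC cost, which is the whole argument for amortizing both conversions.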

Another promising scheme is based on something called a Mach-Zehnder interferometer, which combines two beam splitters and two fully reflecting mirrors. It, too, can be used to carry out matrix multiplication optically. Two MIT-based startups, Lightmatter and Lightelligence, are developing optical neural-network accelerators based on this approach.
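To see why an MZI can do linear algebra, here is its 2×2 transfer matrix built from two 50:50 beam splitters and two phase shifters. This is a standard textbook construction, not code from either startup; meshes of many such MZIs can then realize larger unitary matrices.

```python
import numpy as np

# 50:50 beam splitter and phase shifter, the building blocks of an MZI.
BS = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)

def phase(p):
    """Phase shifter acting on one of the two optical modes."""
    return np.diag([np.exp(1j * p), 1.0])

def mzi(theta, phi):
    """2x2 unitary realized by one Mach-Zehnder interferometer:
    beam splitter, internal phase theta, beam splitter, external phase phi."""
    return phase(phi) @ BS @ phase(theta) @ BS

U = mzi(0.7, 1.3)
# Unitary check: U times its conjugate transpose is the identity,
# meaning the device redistributes light without loss.
assert np.allclose(U @ U.conj().T, np.eye(2))

# Sending a two-mode light field through the MZI multiplies the
# vector of field amplitudes by the matrix U.
x = np.array([0.6, 0.8])
print(np.abs(U @ x))
```

Adjusting the two phases tunes which 2×2 matrix is applied, which is what makes a programmable mesh of MZIs act as a reconfigurable matrix multiplier.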

Lightmatter has already built a prototype that uses an optical chip it has fabricated. And the company expects to begin selling an optical accelerator board that uses that chip later this year. Another startup using optics for computing is Optalysis, which hopes to revive a rather old concept. One of the first uses of optical computing, back in the 1950s, was for the processing of synthetic-aperture radar data.

A key part of the challenge was to apply to the measured data a mathematical operation called the Fourier transform. Digital computers of the time struggled with such things. Even now, applying the Fourier transform to large amounts of data can be computationally intensive. But a Fourier transform can be carried out optically with nothing more complicated than a lens, which for some years was how engineers processed synthetic-aperture data. Optalysis hopes to bring this approach up to date and apply it more widely.
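The lens-as-Fourier-transform idea is easy to see numerically: in a lens's back focal plane, the light amplitude is the 2-D Fourier transform of the field at the aperture. A minimal NumPy sketch:

```python
import numpy as np

# A thin lens optically produces the 2-D Fourier transform of the
# field in its front focal plane; numerically, that is a 2-D FFT.
aperture = np.zeros((64, 64))
aperture[28:36, 28:36] = 1.0   # a small square opening

far_field = np.fft.fftshift(np.fft.fft2(aperture))
intensity = np.abs(far_field) ** 2

# A square aperture yields the classic sinc^2 diffraction pattern,
# brightest at the center (zero spatial frequency) of the plane.
peak = np.unravel_index(np.argmax(intensity), intensity.shape)
print(peak)  # (32, 32)
```

The lens computes this transform at the speed of light, with no arithmetic at all, which is exactly what made it attractive for early radar processing.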

There is also a company called Luminous, spun out of Princeton University, which is working to create spiking neural networks based on something it calls a laser neuron. Spiking neural networks more closely mimic how biological neural networks work and, like our own brains, are able to compute using very little energy. Luminous's hardware is still in the early phase of development, but the promise of combining two energy-saving approaches—spiking and optics—is quite exciting. There are, of course, still many technical challenges to be overcome.

One is to improve the accuracy and dynamic range of the analog optical calculations, which are nowhere near as good as what can be achieved with digital electronics. That's because these optical processors suffer from various sources of noise and because the digital-to-analog and analog-to-digital converters used to get the data in and out are of limited accuracy. Indeed, it's difficult to imagine an optical neural network operating with more than 8 to 10 bits of precision.
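The link between analog noise and usable precision can be sketched with the standard effective-number-of-bits formula; the noise level below is an illustrative assumption, not a measurement of any particular optical processor.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ideal analog results, normalized to a full scale of [-1, 1].
signal = rng.uniform(-1, 1, size=100_000)

def effective_bits(noise_rms):
    """Effective resolution of a noisy analog value, via the
    standard relation SNR_dB = 6.02 * ENOB + 1.76."""
    noisy = signal + rng.normal(0, noise_rms, size=signal.size)
    snr_db = 10 * np.log10(np.mean(signal**2) /
                           np.mean((noisy - signal)**2))
    return (snr_db - 1.76) / 6.02

# An assumed noise floor of 0.1% of full scale already caps the
# computation at roughly 9 effective bits.
print(effective_bits(1e-3))
```

Each halving of the noise buys only one extra bit, which is why pushing an analog optical system past 8 to 10 bits is so hard.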

While 8-bit electronic deep-learning hardware exists (the Google TPU is a good example), this industry demands higher precision, especially for neural-network training. There is also the difficulty of integrating optical components onto a chip. Because those components are tens of micrometers in size, they can't be packed nearly as tightly as transistors, so the required chip area adds up quickly.

A demonstration of this approach by MIT researchers involved a chip only a few millimeters on a side. Even the biggest chips are no larger than several square centimeters, which places limits on the sizes of matrices that can be processed in parallel this way. There are many additional questions on the computer-architecture side that photonics researchers tend to sweep under the rug.
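Back-of-the-envelope arithmetic shows why the area adds up. Assuming a hypothetical 100 µm component pitch and a 2 cm chip side (round-number assumptions consistent with "tens of micrometers" and "several square centimeters"):

```python
# Rough area budget (illustrative assumptions, not measured values).
COMPONENT_PITCH_UM = 100.0   # "tens of micrometers" per optical component
MAX_CHIP_SIDE_CM = 2.0       # "no larger than several square centimeters"

# An N x N interferometer mesh needs roughly N components per side,
# so the largest matrix dimension is the chip side over the pitch.
max_n = int(MAX_CHIP_SIDE_CM * 1e4 / COMPONENT_PITCH_UM)
print(max_n)  # 200
```

Under these assumptions the mesh tops out at a matrix a couple of hundred elements wide, well short of the thousands of neurons per layer in typical networks.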

What's clear, though, is that, at least theoretically, photonics has the potential to accelerate deep learning by several orders of magnitude. Based on the technology that's currently available for the various components (optical modulators, detectors, amplifiers, analog-to-digital converters), it's reasonable to think that the energy efficiency of neural-network calculations could be made 1,000 times better than today's electronic processors.

Making more aggressive assumptions about emerging optical technology, that factor might be as large as a million. And because electronic processors are power-limited, these improvements in energy efficiency will likely translate into corresponding improvements in speed.

Many of the concepts in analog optical computing are decades old. Some even predate silicon computers. Schemes for optical matrix multiplication, and even for optical neural networks, were first demonstrated in the 1970s.

But this approach didn't catch on. Will this time be different? Possibly, for three reasons.


