Cut, clarity, computer?
Diamonds are forever…or, at least, the effects of one particular diamond on quantum computing may be, according to a team of scientists, including researchers from USC, who built a quantum computer in a diamond.
The diamond is the first of its kind to include protection against decoherence—noise that prevents a computer from functioning properly. A demonstration of the technology showed the viability of solid-state quantum computers, which, unlike earlier gas- and liquid-state systems, may represent the future of quantum computing because they can easily be scaled up in size. By comparison, current quantum computers are typically very small and cannot yet compete with the speed of larger, traditional computers.
The multinational team included University of Southern California (USC) professor Daniel Lidar and USC postdoctoral researcher Zhihui Wang, as well as researchers from the Delft University of Technology in the Netherlands, Iowa State University and the University of California, Santa Barbara.
The diamond quantum computer system featured two quantum bits, or qubits, made of subatomic particles. Unlike traditional computer bits, which encode either a one or a zero, qubits can encode a one and a zero at the same time. This property, called superposition, along with the ability of quantum states to “tunnel” through energy barriers, may some day allow quantum computers to perform optimization calculations much faster than traditional computers, researchers said.
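To make superposition concrete, here is a minimal state-vector sketch in Python with NumPy. It is an illustration only, not the team's diamond system: applying a Hadamard gate to each of two qubits puts the register into an equal superposition of all four basis states at once.

```python
import numpy as np

# A classical 2-bit register holds exactly one of four values; a 2-qubit
# register holds a complex amplitude for each of the four basis states
# |00>, |01>, |10>, |11> simultaneously.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate

state = np.zeros(4)
state[0] = 1.0                                 # start in |00>
state = np.kron(H, H) @ state                  # superpose both qubits

# All four outcomes are now equally likely: probabilities are |amplitude|^2.
print(np.round(np.abs(state) ** 2, 3))         # [0.25 0.25 0.25 0.25]
```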
Solid-state computing systems have existed before, but the team said this was the first to incorporate decoherence protection: using microwave pulses to continually switch the direction of the electron spin rotation. “It’s a little like time travel,” Lidar said, because switching the direction of rotation time-reverses the inconsistencies in motion as the qubits move back to their original position. The team demonstrated that its diamond-encased system did indeed operate in a quantum fashion by seeing how closely its results matched those predicted for Grover’s algorithm, a well-known quantum search procedure.
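Grover’s algorithm can be simulated classically for tiny cases, which makes the test easy to picture. The sketch below is only an illustration, not the diamond experiment itself: it runs one Grover iteration over four items, with |11⟩ as an arbitrarily chosen marked state; for two qubits, a single iteration finds the marked item with certainty, whereas a classical search would need multiple guesses on average.

```python
import numpy as np

# Grover's search over N = 4 items, with |11> as the marked item
# (an illustrative choice). For 2 qubits, one Grover iteration suffices.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
state = np.kron(H, H) @ np.array([1.0, 0, 0, 0])   # uniform superposition

oracle = np.diag([1, 1, 1, -1])                    # flip the sign of |11>
s = np.full(4, 0.5)                                # the uniform state
diffusion = 2 * np.outer(s, s) - np.eye(4)         # reflect about the mean

state = diffusion @ (oracle @ state)               # one Grover iteration
print(np.round(np.abs(state) ** 2, 3))             # [0. 0. 0. 1.] -> |11> found
```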
Preventing Simulator Deadlock
With chips of the future likely to have hundreds or even thousands of cores, predicting how these massively multicore chips will behave is no easy task. While software simulations work up to a point, more accurate simulations typically require hardware models: programmable chips that can be reconfigured to mimic the behavior of multicore chips.
Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) recently presented a method that improves the efficiency of hardware simulations of multicore chips and guarantees the simulator won’t go into “deadlock”—a state in which cores get stuck waiting for each other to relinquish system resources, such as memory.
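Deadlock itself is easy to reproduce in software. The toy Python sketch below, unrelated to the CSAIL simulator, shows the classic circular wait: two threads each hold one lock and request the other’s. Acquisition timeouts are used here only so the demonstration terminates rather than hanging.

```python
import threading
import time

lock_a, lock_b = threading.Lock(), threading.Lock()

def worker(first, second, name):
    with first:
        time.sleep(0.1)                    # ensure both threads hold one lock
        # Each thread now waits for a lock the other holds: a circular wait.
        if second.acquire(timeout=1.0):
            second.release()
            print(f"{name}: proceeded")
        else:
            print(f"{name}: deadlocked (gave up after timeout)")

# Opposite acquisition orders create the deadlock; a fixed global lock
# order (always lock_a before lock_b) would prevent it.
t1 = threading.Thread(target=worker, args=(lock_a, lock_b, "core-1"))
t2 = threading.Thread(target=worker, args=(lock_b, lock_a, "core-2"))
t1.start(); t2.start()
t1.join(); t2.join()
```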
This method is expected to make it easier for designers to develop simulations and for outside observers to understand what those simulations are intended to do.
Hardware simulations of multicore chips typically use field-programmable gate arrays (FPGAs). However, chip architects using FPGAs to test multicore-chip designs must simulate the complex circuitry found in general-purpose microprocessors either by hooking together many FPGAs while modeling only a small portion of the whole chip design, or by simulating the circuit behavior in stages, which is extremely slow.
Graduate students Asif Khan and Muralidaran Vijayaraghavan; their adviser, Arvind, the Charles W. and Jennifer C. Johnson Professor of Electrical Engineering and Computer Science; and Silas Boyd-Wickizer, a CSAIL graduate student in the Parallel and Distributed Operating Systems Group, adopted the second approach for a simulation system they’ve dubbed “Arete.” However, the system uses a circuit design they developed that allows the ratio between real clock cycles and simulated cycles to fluctuate as needed, thereby allowing faster simulations and more economical use of the FPGA circuitry.
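That decoupling can be pictured in miniature. The toy Python sketch below is a software analogy of the idea, not Arete’s actual circuit design: each model cycle completes only when the slowest simulated component finishes, so the ratio of host cycles to model cycles fluctuates from cycle to cycle instead of being fixed in advance.

```python
import random

# Toy software analogy (not Arete's circuitry): each simulated component
# may need a different number of host clock cycles to finish one model
# cycle, so the host-cycle/model-cycle ratio fluctuates as needed.
def simulate(model_cycles, n_components=4, seed=0):
    rng = random.Random(seed)
    host_cycles = 0
    for _ in range(model_cycles):
        # e.g., one component stalls on a multi-cycle operation this cycle
        work = [rng.randint(1, 3) for _ in range(n_components)]
        host_cycles += max(work)   # the model cycle completes only when
                                   # the slowest component has finished
    print(f"{model_cycles} model cycles took {host_cycles} host cycles "
          f"(average ratio {host_cycles / model_cycles:.2f})")

simulate(1000)
```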
One advantage of their system, the CSAIL researchers said, is that it makes it easier for outside observers—and even for chip designers themselves—to understand what a simulation is intended to do. With other researchers’ simulators, it’s often the case that “the cycle-level specification for the machine that they’re modeling is in their heads,” Khan says. “What we’re proposing is, instead of having this in your head, let’s start with a specification. Let’s write it down formally, but in a language that is at a very high level of abstraction so it does not require you to write a lot of details. And once you have this specification that clearly tells you how the entire multicore model is going to behave every cycle, you can transform this automatically into an efficient mapping on the FPGA.”
The researchers’ high-level language, which they dubbed StructuralSpec, builds on the Bluespec hardware design language that Arvind’s group helped develop in the late 1990s and early 2000s. The StructuralSpec user gives a high-level specification of a multicore model, and software spits out the code that implements that model on an FPGA. Where a typical, hand-coded hardware model might have about 30,000 lines of code, Khan says, a similar model implemented in StructuralSpec might have only 8,000 lines of code.
–Ann Mutschler