Thursday, January 24, 2019

Notes on Quantum Complexity Theory: Part 1/6

Note: While studying complexity, I often came across the term 'conjecture'. It is defined as: "A conjecture is a conclusion or proposition based on incomplete information, for which no proof has been found."

The context of the article is the circuit model of quantum computation, in which qubits are represented by wires and gates are quantum operators.

Quantum Complexity Class
A collection of problems that can be solved by a certain quantum computational model under certain resource (e.g. time) constraints.
E.g. BQP: the quantum complexity class of all decision problems that can be solved, with bounded error, in polynomial time by a quantum computer.

Quantum Proof: A quantum proof is a quantum state that acts as a witness for a quantum computer running a verification procedure.
The complexity class QMA is defined by this notion.
QMA contains all decision problems whose YES-instances are efficiently verifiable given a quantum proof.
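As a toy illustration (my own sketch, not from Watrous' article), the acceptance of a quantum proof can be simulated numerically: a hypothetical one-qubit verifier that simply measures the witness qubit in the computational basis and accepts on outcome 1.

```python
import numpy as np

def accept_probability(witness):
    """Probability that a toy verifier, which measures the witness qubit
    in the computational basis and accepts on outcome 1, says YES."""
    witness = np.asarray(witness, dtype=complex)
    assert np.isclose(np.linalg.norm(witness), 1.0), "witness must be normalized"
    return abs(witness[1]) ** 2

good_witness = np.array([0, 1])           # |1> : always accepted
bad_witness = np.array([1, 0])            # |0> : always rejected
plus = np.array([1, 1]) / np.sqrt(2)      # |+> : accepted half the time

print(accept_probability(good_witness))   # 1.0
print(accept_probability(bad_witness))    # 0.0
print(accept_probability(plus))           # 0.5
```

In the real definition of QMA, this perfect acceptance is replaced by completeness (for YES-instances, some witness is accepted with probability at least 2/3) and soundness (for NO-instances, every witness is rejected with probability at least 2/3).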

Quantum Interactive Proof System: In this system, there is a quantum verifier and one or more quantum provers. The system involves processing and exchange of quantum information between the provers and the verifier, with the provers attempting to convince the verifier of the answer to some computational problem.

Computational Complexity: Studies the hardness of computational problems, formalized in terms of the resources required by a Turing machine. To consider:
- Models: deterministic, non-deterministic, probabilistic.
- Resources: time, space.
- Interaction among models of different abilities.
Many interesting relationships among models and resource constraints are known, and a common feature is that they are physically motivated. An example is the class of polynomial-time computable functions: a mathematical abstraction of the functions that can be efficiently computed, without errors, on a physical computing device.

This train of thought suggests that complexity theory (and computation) is motivated by physical theories, and quantum mechanics is a viable candidate theory.
Why Quantum Mechanics: VLSI feature sizes keep shrinking, and the limit is the atomic scale. At atomic scales, quantum phenomena become relevant.

How Quantum Mechanics affects complexity theory: The initial definitions were laid down by Bernstein and Vazirani (Quantum Complexity Theory [Bernstein-Vazirani1][Bernstein-VaziraniV2]). Shor's algorithm [Shor1] sheds light on how notions of computational hardness differ between the classical and quantum settings. Another area of quantum complexity-theoretic application is the efficient verification of quantum computation (Urmila Mahadev's 2018 result is an important one here).

Quantum Complexity Theory studies:
- implications of quantum physics for computational complexity theory.
- hardness of computational problems with respect to models of quantum computation.
- how to classify computational problems based on these models. (The complexity will differ across models of computation. The models, here, means models of computation, right? If the physical motivation is taken seriously, would the complexity differ across qubit-realization technologies even when the model of computation is the same? Speaking of which, quantum computing has multiple models of computation: the circuit model and the adiabatic model; is topological quantum computing a different model?)

These are complete notes based on:
John Watrous' article, Quantum Computational Complexity (2008)
arXiv: 0804.3401v1

Up Next:
Part 2: Quantum Complexity Classes and Quantum Circuit Model
Part 3: Polynomial Time Quantum Computations
Part 4: Quantum Proofs
Part 5: Quantum Interactive Proof Systems
Part 6: Further Topics: Quantum Advice, Space-Bounded Quantum Computation, Bounded-Depth Quantum Circuits

Tuesday, December 4, 2018

A case for Si-Spin Qubits

The idea of this blog post is to present a strong case for Si spin qubits as a viable candidate for long-term circuit-model quantum computing. A good deal of factual information about spin qubits is presented along the way.

For a technology to be accepted as capable of performing quantum computation, it must satisfy the DiVincenzo criteria, i.e. it must be/have:
  • A scalable physical system with well characterized qubits.
  • The ability to initialize the states of the qubits to a simple state, such as |000⟩.
  • Long relevant coherence times, much longer than the gate operation time.
  • A “universal” set of quantum gates.
  • A qubit-specific measurement capability.
In the case of silicon spin qubits in quantum dots, the electron spin states are described by the Fermi-Hubbard model (Hamiltonian). Initialisation and measurement are performed by the Elzerman method, and a universal set of quantum gates can be realised by applying microwave pulses to the qubits.

Spin qubits can prove advantageous because:

1. The standard gate time for spin qubits is within tens of nanoseconds, while the qubit decoherence time is usually hundreds of microseconds (nearly an eternity in the quantum world). Decoherence is caused by the electron's interaction with the nuclear spins of the Si/SiGe substrate; changing the substrate can allow even longer decoherence times, up to tens of milliseconds.

2. Spin qubits are among the most promising candidates for long-term 'scalable' quantum computing. One qubit occupies very little chip area (70 nm × 70 nm). They can be arranged in arrays and have the potential to scale to the millions of qubits needed for universal quantum computing (on the order of a billion qubits in a 1 cm × 1 cm chip area).

3. Typically, spin qubits are operated at temperatures around 10 mK, but they have the potential to be operated at 1 K to 4 K. This is a very important point, because classical CMOS can also operate at temperatures up to 4 K, which would (in future) allow the classical control unit and the quantum-dot array to be fabricated on the same silicon die, eliminating the complicated wiring for controlling these qubits (true quantum integrated circuits). These cryo-electronics would have to be fast, low-noise, and have very low power dissipation.

4. Manufacturability of spin qubits is an important criterion. Current manufacturing uses 300 mm process technology, which is already well established (for example, at Intel).
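The figures in points 1 and 2 can be sanity-checked with quick back-of-the-envelope arithmetic (my own calculation; the exact gate time of 20 ns is an assumed value within the "tens of ns" range quoted above):

```python
# Point 1: how many sequential gates fit inside the coherence time?
gate_time = 20e-9        # assumed ~20 ns per gate (tens of ns)
t2 = 100e-6              # ~hundreds of microseconds decoherence time
gates_per_coherence = t2 / gate_time
print(f"~{gates_per_coherence:.0f} sequential gates within T2")   # ~5000

# Point 2: raw packing density of 70 nm x 70 nm qubits on a 1 cm^2 die
qubit_pitch = 70e-9      # one qubit site per 70 nm x 70 nm cell
die_side = 1e-2          # 1 cm
raw_sites = (die_side / qubit_pitch) ** 2
print(f"~{raw_sites:.1e} raw qubit sites per cm^2")   # ~2.0e10
```

Raw packing gives roughly 2×10^10 sites per cm²; the post's figure of about 10^9 usable qubits leaves an order of magnitude of slack for wiring and control overhead.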

It is, however, important to remember that current quantum computing technologies are prone to high error rates and qubit decoherence. While the error rates of spin qubits compare favourably with superconducting qubits, for a spin-qubit processor to function as a NISQ-era (Noisy Intermediate-Scale Quantum) device without quantum error correction, single-qubit gate error rates of 10^(-5) or better would be required, which is currently not possible.

The above points seem very promising, but there are important problems that lie ahead:

1. Fabrication is an imperfect process and qubits are very sensitive. The spin-qubit model involves tunneling a single electron into a quantum dot by applying a gate voltage and manipulating the electron by means of AC pulses. Since fabrication is never perfect, these gate voltages (which control the electrochemical potential) vary from qubit to qubit, while uniformity is essential. Further, current research spin qubits are arranged linearly, which allows qubit mapping only between neighboring qubits. The next step (over the next 2-5 years) would be arranging them in 2D arrays for better mapping, allowing better execution of quantum algorithms.

2. Scalable control is perhaps one of the most important considerations for actually using a qubit. It has been shown that 40+ connection lines are required to operate a 5-qubit Si spin-qubit chip, of which 14 are high-bandwidth (fast signal) connections. At this rate, the number of classical control channels required would explode with qubit count. This is absolutely undesirable, and is the issue addressed in part 2 of this blog.
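A naive extrapolation from the 5-qubit figure shows why this is alarming (a crude linear model of my own; real architectures would share and multiplex lines, so treat this as an upper bound):

```python
# ~8 control lines per qubit, extrapolated from 40+ lines for 5 qubits
lines_per_qubit = 40 / 5

for n_qubits in (5, 1_000, 1_000_000):
    n_lines = int(lines_per_qubit * n_qubits)
    print(f"{n_qubits:>9} qubits -> {n_lines:>9} control lines")
```

At a million qubits this naive scheme needs eight million control lines, which is why multiplexed or on-die cryogenic control is considered essential.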

Thursday, November 8, 2018

Architecting an ASIC: Notes on VLSI Chip Design

"I am a quantum engineer, but on Sundays, I have principles."
- John Bell

Today's VLSI capability allows multi-billion-gate integration on a silicon chip. Architecting a (System-on-)Chip therefore needs to be a methodical process.

Questions to ask before starting an SoC chip design [1]:
  1. Power vs Performance: Is the chip designed to be power efficient or able to deliver performance?
  2. Is the design an embedded processor (that runs the algorithms in software), or do the algorithms need to run directly in hardware?
  3. What kind of processor is necessary with what amount of local memory?
  4. Internal SRAM vs External DRAM (chip)?
  5. Local Storage vs External SSD/HDD storage?
  6. External (Communication) Interfaces - PCIe, FMC, SMA, Eth?
  7. Ethernet Connectivity/Ethernet Controller required?
  8. How many clocks would be required and what frequencies?
  9. What is going to be the reset scheme?
  10. Is internal caching required to improve performance (reduce latency)?
  11. Gate Count Budget, Power Budget, Pin budget?
  12. Is it going to be pad-limited or core-limited?
  13. Flexibility: design something modular (so that the core remains the same and external features can be modified for different market segments)?
  14. All in house vs using IPs?
  15. Can customizing standard IPs provide performance? (check with vendors for rights)
Architecture and Micro-architecture
Micro-architecture can be divided into three parts:
- Partitioning the Chip
- Data path within the Chip
- Control Functions

1. Partitioning the Chip: Break down the idea into easier-to-understand functionalities (Divide-and-Conquer).
2. Datapath Design: Do I need FIFOs, multiplexers, adders, multipliers, ECC, CRC, encoding/decoding, scrambling/encryption? Internal datapath size - 32-bit, 64-bit, 128-bit?
3. Control Functions: Do I need state-machines? How many? Encoded State-Machines vs One-hot state machine? Full Handshake vs half-handshake in data transfer between different clock domains?
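As an example of one of the datapath functions listed above, here is a minimal bitwise CRC-8 (polynomial 0x07, the common 'plain' CRC-8 variant) sketched in Python; a hardware implementation would be an LFSR performing the same shift-and-XOR per clock cycle:

```python
def crc8(data: bytes, poly: int = 0x07, init: int = 0x00) -> int:
    """Bitwise CRC-8: shift each data byte through an 8-bit register,
    XOR-ing in the polynomial whenever the MSB falls off."""
    crc = init
    for byte in data:
        crc ^= byte
        for _ in range(8):
            if crc & 0x80:
                crc = ((crc << 1) ^ poly) & 0xFF
            else:
                crc = (crc << 1) & 0xFF
    return crc

print(hex(crc8(b"123456789")))   # 0xf4, the standard CRC-8 check value
```

In an RTL datapath the same computation is usually unrolled so that a whole byte (or word) is processed per clock, trading gates for latency.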

[For a good design, the above three steps should be followed in the given order. Jumping directly into control logic doesn't accelerate things; it complicates them.]

KISS and the 80-20 rule:
One of the preferred strategies in digital design is KISS (Keep It Simple, Stupid). 'Simplicity' refers to a design that is simple yet meets all the functionality criteria. Simpler designs are easier to understand, debug and port.
Further, understanding trade-offs in a digital design is equally important, and the 80-20 rule is a good approximation. Think of it like this: 20% of your design will be exercised 80% of the time, and the remaining 80% only 20% of the time. In such a scenario, ask what the best configuration for the design is, and which sub-systems (functionalities) deserve the most optimization effort.

Understanding Errors:
Hardware architectures are prone to errors: timing mismatches in parallel lines, metastability issues, etc. A hardware extension for handling these errors is a good idea, but often these errors can be reported back to the software, which can handle them with more flexibility.

Getting Started with HDL (VHDL/Verilog):
Many questions found throughout the internet ask for the best way to get started and become proficient with one of the hardware description languages. The answer is quite simple, to be honest (and is expressed in more than one way in the answers to those questions): strengthen your digital design principles and practice coding simple designs.

A good place to get started with this is:
I will make sure to put together a Git repository for the same, with many more such problems to help you get started with HDL.

As you progress, you figure out the best design practices and tools on your own. I will dedicate a separate post to the best design practices I have figured out so far.

[1] Advanced Chip Design: Practical Examples in Verilog by Kishore Mishra.

Friday, October 12, 2018

Quantum Information: Part 1

The idea of (classical) information connects to bits (basically a 1 or a 0 stored in a particular memory location), and the manipulation of information is usually regarded as a topic in computer science. However, we can also interpret information processing as a physical phenomenon, since information is always encoded in some physical state, and information processing is carried out by a physically realizable system. With this thought, information can in fact be linked to the study of the underlying physical processes.

Physics of Information.
Landauer's Principle. Rolf Landauer in 1961 showed that information erasure is a dissipative process: it always involves a compression of phase space and so is irreversible. Say 1 bit of information is represented by keeping a molecule in a box on either the left or the right side of a defined boundary (left = 0 and right = 1). If, using a movable piston, we force the molecule to the left side (with absolute certainty, irrespective of its initial state), then this process reduces the entropy of the gas by ΔS = k ln 2. If the process is isothermal at temperature T, the corresponding heat is dissipated from the box to the environment, and the work required is W = kT ln 2.
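Plugging in numbers (my own arithmetic, not from the notes), the Landauer limit per erased bit at room temperature is minuscule:

```python
import math

k_B = 1.380649e-23                       # Boltzmann constant, J/K
T = 300.0                                # room temperature, K

landauer_limit = k_B * T * math.log(2)   # minimum work per erased bit
print(f"{landauer_limit:.3e} J")         # ~2.871e-21 J per bit
```

Real CMOS logic dissipates many orders of magnitude more than this per switching event, which is why the limit is of theoretical rather than engineering interest today.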

In classical computing, gates with more than one input are irreversible: given the output of a gate, we cannot infer with certainty what the inputs were. A NAND gate [ NAND(a,b) = NOT(a AND b) ] takes two inputs and produces one output, so a bit is erased. Thus at least W = kT ln 2 of work is needed to operate the gate (a theoretical limit).

Charles Bennett in 1973 showed that any computation can be carried out using reversible gates and so, theoretically, would require no power expenditure. For example, the Toffoli gate [ TOFFOLI(a,b,c) = (a, b, c XOR (a AND b)) ] is a reversible equivalent of NAND (3 inputs and 3 outputs). Replace every NAND with a TOFFOLI and we get a reversible circuit. But a lot of extra bits are generated along the way, and perhaps this has only delayed the energy cost. Bennett resolved this as follows: first compute through the circuit, print a copy of the answer (a logically reversible operation), then run the circuit in reverse to return to the initial state. In practice, reversible gates still dissipate orders of magnitude more energy per gate than this limit.
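The NAND-from-Toffoli construction is easy to check in code (a sketch with classical bits only; a real quantum Toffoli also acts on superpositions):

```python
def toffoli(a: int, b: int, c: int) -> tuple:
    """TOFFOLI(a, b, c) = (a, b, c XOR (a AND b)) -- a reversible 3-bit gate."""
    return (a, b, c ^ (a & b))

def nand(a: int, b: int) -> int:
    """NAND via Toffoli: fix the target bit to 1 and read the third output."""
    return toffoli(a, b, 1)[2]

for a in (0, 1):
    for b in (0, 1):
        assert nand(a, b) == 1 - (a & b)                  # matches NOT(a AND b)
        assert toffoli(*toffoli(a, b, 1)) == (a, b, 1)    # applying it twice undoes it

print("Toffoli reproduces NAND and is its own inverse")
```

The fixed '1' fed into the target is exactly the kind of ancilla bit that accumulates, which is why Bennett's compute-copy-uncompute trick is needed.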

The resolution of the Maxwell's demon problem is also associated with information erasure. Since the demon's memory for recording the energies of atoms passing through the gate is finite, at some point the old information has to be erased, and the power cost is then paid. Leo Szilard (a pioneer of the physics of information), in his 1929 analysis of the Maxwell's demon problem, invented the concept of the bit (the term "bit" was later coined by John Tukey).

Quantum information.
Nature is quantum mechanical. E.g., the clicks registered in a Geiger counter exhibit a truly random Poisson process. In classical dynamics there is no place for true randomness (though chaotic systems exhibit behaviour that is indistinguishable from true randomness). In quantum theory, performing a measurement of one of two non-commuting observables (observables don't commute if they can't be simultaneously diagonalized, i.e. if they don't share an eigenvector basis; non-commuting observables obey the uncertainty principle) alters the state with respect to the other. Thus measurement disturbs the state of the system. There is no counterpart to this limitation in classical mechanics.
(Also read: the Copenhagen interpretation.)
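Non-commutation is concrete with the Pauli matrices (a standard textbook example, not specific to these notes): X and Z do not commute, so they share no eigenbasis and cannot be measured simultaneously.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]])    # Pauli X (bit flip)
Z = np.array([[1, 0], [0, -1]])   # Pauli Z (phase flip)

commutator = X @ Z - Z @ X        # [X, Z]
print(commutator)                 # nonzero matrix => X and Z don't commute
print(np.allclose(commutator, 0))  # False
```

Since [X, Z] ≠ 0, measuring Z on an X eigenstate (like |+⟩) yields a random outcome and disturbs the state, which is the point made above.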

"The tradeoff between acquiring information and creating a disturbance is related to quantum randomness. It is because the outcome of a measurement has a random element that we are unable to infer the initial state of the system from the measurement outcome."

No-Cloning Theorem. If we could copy a quantum state, we could make a copy of the original and measure the copy without altering the original, thus beating the principle of disturbance. Unlike classical bits, quantum bits (qubits) cannot be copied with perfect fidelity. This is established as the no-cloning theorem.
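A small numerical check (my own sketch): try to use a CNOT gate as a 'copier'. It clones computational-basis states perfectly but fails on a superposition, exactly as the linearity argument behind the no-cloning theorem demands.

```python
import numpy as np

# CNOT on two qubits: flips the target when the control is |1>
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

def clone_fidelity(psi):
    """Fidelity between CNOT(|psi>|0>) and the ideal clone |psi>|psi>."""
    zero = np.array([1, 0])
    out = CNOT @ np.kron(psi, zero)
    ideal = np.kron(psi, psi)
    return abs(np.vdot(ideal, out)) ** 2

one = np.array([0, 1])                  # basis state |1>
plus = np.array([1, 1]) / np.sqrt(2)    # superposition |+>

print(clone_fidelity(one))    # 1.0 -- basis states copy fine
print(clone_fidelity(plus))   # 0.5 -- superpositions do not
```

On |+⟩|0⟩ the CNOT produces the entangled Bell state (|00⟩+|11⟩)/√2 rather than the product |+⟩|+⟩, which is why the fidelity drops to 1/2.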

The difference between classical and quantum information was made precise by John Bell in 1964, during his time at CERN. Bell showed that "quantum information can be (in fact, typically is) encoded in non-local correlations between different parts of a physical system, correlations with no classical counterpart." (More on this later.)

Next Post: Quantum Algorithms, Quantum Complexity and Quantum Parallelism.

Credits: Notes by John Preskill on Quantum Information (Caltech). The complete work can be accessed here.

Friday, September 29, 2017

Quantum Computing: A motivation

As a layman, I used to wonder: what is quantum? Is it breaking into another abstraction of the subatomic? Can we harness quanta? What does it look like? Just breaking the ice, I'd put it simply: quantum is real. And along come its spectacular properties.

Two weeks have passed since I dived into quantum computing. It is motivating indeed to think of the quantum phenomena of entanglement and quantum teleportation. But as intriguing and spectacular as the quantum world is, even greater is its complexity. Prof. Feynman once said, "If you think you understand quantum mechanics, you don't understand quantum mechanics". That thought was somewhere in the corner of my head while choosing this as a major, but despite the odds it needs to be done. Thus, I sit here in a classroom of roughly 150 (plus a few people joining live from the University of Twente and TU Eindhoven). My professors are Leo DiCarlo and David Elkouss from QuTech in Applied Sciences. They are among the best and leading research scientists in the quantum world. I was reading about the loophole-free Bell test performed at TU Delft a couple of years ago, and these people were the ones who actually did it! They disproved Einstein! (Read: Loophole Free Bell Test - Nature)

Quantum starts with qubits, the fundamental unit of quantum computation: a state that can be |0⟩, |1⟩ or a superposition of |0⟩ and |1⟩, represented as |ψ⟩ = α|0⟩ + β|1⟩. This is the only basic thing you can actually take away from your first lecture. As you progress, you meet qubit representations, qubit measurements and qubit computation using quantum gates. The beauty of quantum fundamentals appears when we represent it mathematically and can actually see the phenomena being proved through maths. It's all in the math!
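That first-lecture takeaway fits in a few lines of code (a minimal numpy sketch assuming the standard Born rule): a qubit is just a normalized 2-vector, and the measurement probabilities are the squared amplitudes.

```python
import numpy as np

alpha, beta = 1 / np.sqrt(2), 1 / np.sqrt(2)              # equal superposition
psi = alpha * np.array([1, 0]) + beta * np.array([0, 1])  # |psi> = a|0> + b|1>

assert np.isclose(np.linalg.norm(psi), 1.0)   # normalization: |a|^2 + |b|^2 = 1

p0, p1 = abs(psi[0]) ** 2, abs(psi[1]) ** 2   # Born rule
print(p0, p1)   # 0.5 0.5 -- each measurement outcome equally likely
```

In general α and β are complex, and it is the relative phase between them (invisible in these probabilities alone) that quantum gates exploit.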

Why do I study quantum, you ask? A theory that was once unacceptable to Einstein is emerging. Moore's law got the world of digital computing this far; what is the future of computing? How would you solve computations so large and complex that they cannot be handled by parallel computing or even supercomputers? The answer lies in quantum. As an analogy, here's a fact: to simulate a quantum computer with 100 qubits, we would require far more data storage than the total data of the world at present, which is a few zettabytes.

If you ask what other problems can be solved on a quantum computer: the implications are immense - optimization problems, molecule simulation, airplane simulation, genome sequencing, chemistry and drug development. In fact, to be fair, the true potential of quantum computing (quantum supremacy) is not known yet.

Once remarked to be an "entirely useless" field of physics, quantum mechanics is no doubt the weirdo of physics. Coming up with good quantum algorithms is hard. Why? Our human intuition is rooted in the classical world. To design good quantum algorithms, one must turn off one's classical intuition for at least part of the design process, using truly quantum effects to achieve the desired algorithmic end. And it is not enough for a design to be merely quantum mechanical; it must be better than any existing classical algorithm.

Quantum computation and quantum information requires quantum mechanics, computer science, information theory and cryptography. Most important of all, it requires a leap of imagination to progress in such a field of study. After these two weeks, I can say quantum is fun; it just needs to be viewed right.