# Balanced calibrations

The default calibration method for M3 is what we call “balanced” calibration. We came up with this method as an intermediary between truly “independent” calibrations, which run two circuits for each qubit to get the error rates for \(|0\rangle\) and \(|1\rangle\), and “marginal” calibrations, which execute only the two circuits \(|0\rangle^{\otimes N}\) and \(|1\rangle^{\otimes N}\). The former is expensive, while the latter can lead to inaccurate results when state-preparation errors are present.

Balanced calibrations run \(2N\) circuits for \(N\) measured qubits, but the calibration circuits are chosen in such a way as to sample each error rate \(N\) times. For example, consider the balanced calibration circuits for 5 qubits:

```
from qiskit.test.mock import FakeAthens
import mthree
mthree.circuits.balanced_cal_strings(5)
```

```
['01010',
 '10101',
 '00110',
 '11001',
 '00011',
 '11100',
 '00001',
 '11110',
 '00000',
 '11111']
```
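
The sampling property these strings guarantee can be checked directly. The following sketch (pure Python, using the strings printed above) verifies that every bit position is prepared in \(|0\rangle\) exactly \(N\) times and in \(|1\rangle\) exactly \(N\) times:

```python
# Balanced calibration strings for N = 5 qubits (the output shown above)
strings = ['01010', '10101', '00110', '11001', '00011',
           '11100', '00001', '11110', '00000', '11111']

N = 5
for pos in range(N):
    zeros = sum(s[pos] == '0' for s in strings)
    ones = sum(s[pos] == '1' for s in strings)
    # Each qubit position samples |0> and |1> exactly N times each
    assert zeros == N and ones == N
```

Note also that the strings come in complementary pairs (each string appears alongside its bit-flipped partner), which is how the \(2N\) circuits split the sampling evenly between the two states.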

For every position in the bit-string you will see that a 0 or a 1 appears \(N\) times. If there is a 0, then that circuit samples the \(|0\rangle\) state for that qubit, and similarly a 1 samples the \(|1\rangle\) state. So when we execute the \(2N\) balanced calibration circuits with a given number of shots each, every error rate in the calibration data is actually sampled \(N \times\) shots times. Thus, when you pass the shots value to M3 in balanced calibration mode, it internally divides by the number of measured qubits so that the precision matches the precision of the other methods. That is to say that the following:

```
backend = FakeAthens()
mit = mthree.M3Mitigation(backend)
mit.cals_from_system(shots=10000)
```

will sample each qubit error rate 10000 times, regardless of which method is used. Moreover, this yields a calibration process whose sampling overhead is independent of the number of measured qubits.
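
The arithmetic behind this can be sketched as a back-of-the-envelope illustration (this is not mthree's internal code, just the bookkeeping described above, assuming the requested shots divide evenly by the qubit count):

```python
N = 5          # number of measured qubits
shots = 10000  # requested precision, as passed to cals_from_system

# Balanced mode runs 2N circuits, each with shots / N samples.
shots_per_circuit = shots // N

# Every error rate appears in N of those circuits, so the effective
# number of samples per error rate matches the requested shots value.
samples_per_error_rate = N * shots_per_circuit
assert samples_per_error_rate == shots

# Total samples across the whole calibration: 2 * shots, independent of N
total_samples = 2 * N * shots_per_circuit
assert total_samples == 2 * shots
```

Because the total sample count stays at \(2 \times\) shots no matter how many qubits are measured, the calibration cost does not grow with device size.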