The Registry contains many exciting Parts, but a large number of them have not yet been characterized. Designing sound measurement approaches for characterizing new parts, or developing and implementing an efficient method for characterizing thousands of parts, are good examples of how this gap can be closed.


Luminescence is a very popular method for measuring translation: because there is virtually no background signal, even very low signals can be detected, making luminescence far more sensitive than fluorescence. A problem when comparing several measurement series is that the data are reported in arbitrary units. Since these relative units vary from plate reader to plate reader, and from measurement to measurement on the same plate reader, absolute comparison of several measurements is difficult. Unlike for fluorescence, no protocol for normalizing luminescence signals has yet been established in the iGEM community. We therefore developed several protocols to calibrate luminescence signals across measurements.

We took two different approaches to normalization: one based on chemiluminescence (luminol normalization) and one based on bioluminescence (NanoLuc normalization).

Luminol Normalization

The motivation behind a normalization based on a chemical reaction is that the reaction is easy to reproduce and the chemicals are cheap and readily available.
Luminol normalization is based on the chemiluminescence emitted during the catalytic oxidation of luminol. The emission peak lies at 424 nm. Luminol serves as the luminophore, copper(II) ions act as the catalyst, and hydrogen peroxide is used as the oxidant. The reaction requires a basic pH to proceed.

Figure 1: Luminol reaction in the presence of a metal ion catalyst
Here copper is used as the catalyst, but the reaction is known to work with other metal ions as well

The reaction was carried out by mixing the luminol solution with 30% hydrogen peroxide in a 4:1 ratio (see protocol). The reaction was started by adding the catalyst, in our case Cu(II) ions from a copper sulfate solution. The plate reader protocol began with a 30-second shaking step to mix the solutions homogeneously. In preliminary tests, only a slight decrease in signal was detected over time, so we propose measuring the luminescence in a standardized manner after the 30 seconds of mixing. Because hydrogen peroxide is metastable and its concentration in solution decreases over time, it was used in excess, so that its concentration can be assumed to be constant over the course of the reaction. The results showed a linear correlation between the detected luminescence and the concentration of added copper sulfate, and the replicates were in relatively good agreement.
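The linear relation between copper sulfate concentration and luminescence can be checked with an ordinary least-squares fit. The following is a minimal sketch using hypothetical readings; real values depend on the instrument, gain setting, and reagent lot:

```python
# Hypothetical calibration data: CuSO4 concentration (mM) vs. measured
# luminescence (arbitrary plate-reader units). Example values only.
conc = [0.125, 0.25, 0.5, 1.0, 2.0]
lum = [1.1e4, 2.3e4, 4.4e4, 9.1e4, 1.8e5]

n = len(conc)
mean_x = sum(conc) / n
mean_y = sum(lum) / n

# Ordinary least-squares: luminescence = slope * concentration + intercept
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(conc, lum)) / \
        sum((x - mean_x) ** 2 for x in conc)
intercept = mean_y - slope * mean_x

# Coefficient of determination as a quick linearity check
ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(conc, lum))
ss_tot = sum((y - mean_y) ** 2 for y in lum)
r_squared = 1 - ss_res / ss_tot

print(f"slope = {slope:.4g} AU/mM, R^2 = {r_squared:.4f}")
```

An R² close to 1 supports using the fitted slope as a calibration factor; a poor fit, as in Figure 2, argues against the method.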

Figure 2: A calibration curve using Luminol and CuSO4 as an inducer for the reaction
The reaction was induced with dilutions of copper sulfate, added as the last reagent. The calibration curve is not ideal, so this method may not be well suited for the normalization of luminescence data[1] [2]

Figure 3: Scheme of the reaction setup and the workflow using a 384 white well plate and a plate reader for analysis

NanoLuc Normalization

The second approach we followed is normalization via nanoluciferase (NanoLuc). The NanoLuc reaction has the advantage of being significantly more sensitive than Firefly luciferase or Renilla luciferase[2]:

Figure 4: Comparison between NanoLuc and Firefly expression using the same construct, except for the respective coding sequence

A protocol for a NanoLuc assay has been published by Promega. The associated Nano-Glo® Luciferase Assay System contains the luciferase assay substrate (Furimazine) as well as the luciferase assay buffer and an integral lysis buffer. Addition of the luciferase protein starts the reaction, and light is emitted with an emission peak at λmax = 460 nm.
The luminescence is linear with the luciferase concentration over a range spanning more than six orders of magnitude, and the signal has a half-life of ~120 min, making NanoLuc a promising candidate for normalization.[3]
To test whether the NanoLuc assay is suitable for normalizing luminescence in the iGEM context, we performed a series of tests in which we varied the concentration of purified NanoLuc using the Nano-Glo® Luciferase Assay from Promega.
Since the plate reader truncates signals above 10^7 counts, only concentrations of 10^2 pM and below were used.
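Given the reported ~120 min signal half-life, readings taken at different times after substrate addition can be made comparable with a simple exponential back-correction. This is a minimal sketch assuming ideal first-order decay; the half-life should be verified on one's own instrument:

```python
HALF_LIFE_MIN = 120.0  # reported signal half-life of the NanoLuc glow reaction

def decay_corrected(signal, minutes_after_substrate):
    """Back-correct a luminescence reading to t = 0, assuming simple
    exponential decay with the reported ~120 min half-life."""
    return signal * 2 ** (minutes_after_substrate / HALF_LIFE_MIN)

# A reading taken 60 min after substrate addition is scaled up by sqrt(2).
print(decay_corrected(1000.0, 60.0))
```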

A NanoLuc solution with a concentration of 560 pM was prepared by diluting the original protein solution 1:1000. A serial dilution was then set up in a 384-well plate: starting from well 1, which contained 100 µl of the 560 pM solution, 50 µl were transferred into the next well, which was pre-filled with 50 µl of the protein elution buffer, analogous to the protocol for normalizing fluorescence data with fluorescein. This yielded a series from the 560 pM stock down to a 1:64 dilution (8.75 pM). The last well contained only the elution buffer in which the protein had been purified. From each dilution, 10 µl were transferred to a white 384-well plate and supplemented with 10 µl of the reaction mixture from the Nano-Glo® Luciferase Assay.
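The dilution series above can be expressed numerically; this sketch reproduces the concentrations of the seven twofold steps plus the buffer-only blank:

```python
# Serial 1:2 dilution series as described in the protocol: start at 560 pM
# and transfer 50 µl into 50 µl of elution buffer six times (down to 1:64),
# plus a buffer-only blank as the last well.
start_pM = 560.0
series = [start_pM / 2 ** i for i in range(7)]  # 560 ... 8.75 pM
series.append(0.0)  # elution-buffer blank

print(series)  # [560.0, 280.0, 140.0, 70.0, 35.0, 17.5, 8.75, 0.0]
```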

Figure 5: A calibration curve using purified NanoLuc
The regression line fits considerably better than in the luminol approach. This method is therefore better suited for accurate quantitative luminescence data

Figure 6: Workflow for normalization of luminescent data using purified NanoLuc protein

Proof of Concept

The NanoLuc calibration curve, including four replicates, showed that the outcome is reproducible. The protocol and materials provided by Promega are affordable and easy to store, making the NanoLuc assay an excellent candidate for normalization. The only downside is that the NanoLuc protein first has to be purified by the iGEM team in order to use this normalization approach. We suggest including freeze-dried NanoLuc luciferase in future measurement kits, which will hopefully be provided to iGEM teams again in the upcoming years.
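Once a calibration slope has been determined from a curve like Figure 5, raw plate-reader units can be expressed as NanoLuc-equivalent concentrations. The slope below is a hypothetical example value, not a measured constant; each team would determine its own slope on its own instrument:

```python
# Hypothetical calibration slope taken from a NanoLuc calibration curve:
# arbitrary plate-reader units per pM of NanoLuc.
CAL_SLOPE_AU_PER_PM = 5.0e4

def to_nluc_equivalent(signal_au):
    """Convert a raw luminescence reading (AU) into the NanoLuc
    concentration (pM) that would produce the same signal."""
    return signal_au / CAL_SLOPE_AU_PER_PM

print(to_nluc_equivalent(2.5e5))  # 5.0 pM NanoLuc-equivalent
```

Reporting values in NanoLuc-equivalent units makes measurements from different plate readers and different days directly comparable.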

Cell-Free Measurement

For all cell-free measurements we performed this year, we followed an established protocol to keep the generated data as comparable as possible. Every measurement was performed in a total volume of 10 µl and included replicates to verify the significance of the datasets. At the beginning of the project we noticed substantial crosstalk between neighboring wells of our well plates, which cumulatively influenced the luminescence values in our datasets. This effect has already been described in the literature, where a software tool was implemented to compensate for it as well as possible[4]. To better understand how this effect influences our data, we compared different plate setups and concluded that a software-free solution would be best for us, as it avoids adding yet another tool to our workflow and keeps it as simple as possible. We therefore decided to populate only every second well with a cell-free reaction, effectively leaving one empty well in each direction around an individual reaction. Judging from the activity observed for our negative control, we were able to show that the crosstalk was indeed eliminated with this approach: there was no significant difference between measurements of the background alone and measurements in which a highly active cell-free reaction was located two wells away from the negative control.
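The "every second well" layout can be generated programmatically. This sketch assumes the standard 384-well naming convention (rows A–P, columns 1–24) and lists the wells that receive a reaction under the scheme described above:

```python
import string

# Checkerboard-style layout: reactions only in every second row and every
# second column, leaving one empty well in each direction around a reaction.
rows = string.ascii_uppercase[:16]  # rows A..P of a 384-well plate
used = [f"{r}{c}"
        for i, r in enumerate(rows)
        for c in range(1, 25)
        if i % 2 == 0 and c % 2 == 1]

print(len(used))  # 96 reactions fit on one 384-well plate
print(used[:4])   # ['A1', 'A3', 'A5', 'A7']
```

The trade-off is capacity: only a quarter of the wells (96 of 384) can be used, but no software correction for crosstalk is needed.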

The Problem of Batch-to-Batch Variation

Before we planned the design of our genetic construct, we became aware of a problem that could fundamentally undermine any data comparison. In the literature, an effect termed the batch effect was described in a study in which high-throughput screening of protoplasts of Arabidopsis thaliana and Sorghum bicolor was used to characterize genetic circuits. Their design involved synthetic repressible promoters with several repressor binding sites. When they measured basal levels of the promoter without addition of any repressor, they noticed a large difference in expression between preparations performed on different days. This variance stems from variations in leaf tissue health, the isolation of different cell types, shear stress during pipetting and centrifugation, and slight variations among enzymatic supplies[2].
To address this problem, they proposed a ratiometric normalization method to account for the batch effect.

Firefly Normalization

This previously described batch effect was also noticeable in our cell-free extracts, as the plants we grew for the extraction could not be kept under identical conditions for every extract preparation. To cope with this effect, we implemented the approach of Schaumberg et al. and included a dual-luciferase design in our measurement constructs. Every lvl2 construct therefore contains a Firefly luciferase cassette with the same regulatory sequences. The second cassette harbours a NanoLuc luciferase transcriptional unit that is subject to change and is used to characterize individual genetic parts. For this we made use of the placeholder sequences introduced by iGEM Marburg 2019, which enabled us to pre-assemble lvl2 acceptor vectors with placeholder sequences at different positions of our cloning system. With this we could assemble measurement vectors in a much shorter timeframe and with much higher efficiency. We reasoned that the normalized ratio between FLuc and NLuc expression can be used to compare gene expression measured on different days.

Normalized Ratio

In order to put this approach to the test, we tested a total of five parts, which had shown promising expression strengths in prior experiments, in two different batches of tobacco chloroplast cell-free extract (Figure 1 and 2).
The parts we tested were:

  1. rbcL 5’UTR Nicotiana tabacum
  2. Synthetic RBS
  3. Gene10 5’UTR T7 phage
  4. TMV 3’UTR
  5. Rpl32 3’UTR Oryza sativa
Figure 7: 5'UTR characterization using our lvl2 measurement vector with a dual luciferase approach (Nluc and Fluc)
Figure 8: 3'UTR characterization using our lvl2 measurement vector with a dual luciferase approach (Nluc and Fluc)

The raw values of the characterization experiment were normalized by dividing the NLuc signal by the FLuc signal for each individual construct. Since the FLuc signal should in theory remain the same across the different constructs (the same regulatory sequences were used), this ratio normalizes the NLuc signal. Figure 9 shows the comparison of five different regulatory elements in two different tobacco chloroplast cell-free extracts. The data were highly comparable between the two batches, supporting our dual-luciferase normalization method.
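The ratiometric normalization reduces to a simple per-construct division. In this sketch the part names are taken from the list above, while the readings are hypothetical placeholders:

```python
# Ratiometric normalization: for each construct, divide the NanoLuc reading
# by the Firefly reading from the same reaction. Values are hypothetical.
raw = {
    "rbcL 5'UTR":    {"nluc": 8.2e5, "fluc": 4.1e4},
    "synthetic RBS": {"nluc": 3.0e5, "fluc": 3.9e4},
    "gene10 5'UTR":  {"nluc": 1.6e6, "fluc": 4.3e4},
}

ratios = {part: v["nluc"] / v["fluc"] for part, v in raw.items()}
for part, ratio in ratios.items():
    print(f"{part}: NLuc/FLuc = {ratio:.1f}")
```

Because both luciferases are measured in the same reaction, batch-dependent factors that scale all signals equally cancel out of the ratio.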

Figure 9: 5 parts have been tested in 2 different Tobacco chloroplast cell-free extracts
The values are displayed as the ratio of the NLuc signal divided by the FLuc signal of the individual measurement. The two batches of tobacco extract are shown in two different shades of green

Figure 10: Correlation graph of the data from Figure 9
The data points lie close to the regression line. The graph shows a near-perfect linear correlation between the expression of the different parts

  3. Sheng-Xiang He, Ge Song, Jia-Ping Shi, Yu-Qi Guo, Zhan-Yun Guo. Nanoluciferase as a novel quantitative protein fusion tag: Application for overexpression and bioluminescent receptor-binding assays of human leukemia inhibitory factor. Biochimie 106, 140–148 (2014). ISSN 0300-9084.
  5. ACS Synth. Biol. 8(6), 1361–1370 (2019).
  6. Schaumberg, K., Antunes, M., Kassaw, T. et al. Quantitative characterization of genetic parts and circuits for plant synthetic biology. Nat Methods 13, 94–100 (2016).