Biotechnology and pharmaceutical companies will always need to test their drugs for adverse effects on humans. In fact, the European Union's REACH (Registration, Evaluation, Authorisation and Restriction of Chemicals) regulation specifically requires that all new and existing chemicals within the European Union, approximately 30,000 substances, be tested for hazardous effects.
As a result, there are currently nearly 200 different establishments licensed by the Home Office to breed, supply or use animals in research or testing in the UK, including universities, pharmaceutical and chemical companies, and contract research organisations (CROs). During 2010, nearly 4m scientific procedures were carried out on animals in the UK, according to figures published by the Home Office.
However, at the same time, a ban on animal testing required by the Cosmetics Regulation 1223/2009 is due to come into effect from March 2013. As a result, many researchers in this area are finding themselves caught in the middle, with one piece of legislation requiring them to test all of the chemicals contained within their products, while another tells them that they cannot use animals for this purpose.
The testing of sensitising chemicals, in particular, has relied heavily on animal experimentation over the years, leading to calls to replace these experiments with in vitro tests. New research techniques are helping to achieve this objective, especially as in vitro tests allow researchers to use fewer animals, yield more useful results and are often more cost-effective.
To date, no validated non-animal replacements are available for the identification of skin-sensitising chemicals; instead, today's tests are carried out in mice or guinea pigs. In vitro alternatives to these animal tests would not only offer improved reliability and accuracy, but would also correlate more closely with human reactivity. Already, new insights in this area have led to several in vitro tests currently under review by the European Centre for the Validation of Alternative Methods (ECVAM).
‘Worldwide, more and more people are suffering from allergies, which means that this area has become an important health concern,’ says Ann-Sofie Albrekt at Lund University’s department of immunotechnology in Sweden. ‘As a scientist, I am interested to find out why otherwise harmless compounds can often elicit an adverse immune response in humans.’
Albrekt and the team, headed by Malin Lindstedt, have recently developed a novel cell-based assay called Genomic Allergen Rapid Detection (GARD) for the prediction of sensitising chemicals (BMC Genomics. 2011, 12:399). By analysing the transcriptome or RNA content of the human cell line MUTZ-3 after 24-hour stimulation, using 20 different sensitising chemicals and 20 non-sensitising chemicals and vehicle controls, the researchers identified a biomarker signature of 200 genes with potent discriminatory ability.
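The classification principle behind such a gene signature can be illustrated with a deliberately simplified sketch. The example below uses a nearest-centroid rule: a new chemical's expression profile is compared against the average profiles of known sensitisers and non-sensitisers. All gene counts and expression values here are invented, and this is not the actual GARD algorithm, which rests on a 200-transcript signature measured in MUTZ-3 cells.

```python
# Illustrative nearest-centroid classification over a (toy) biomarker
# signature. Profiles are lists of expression values, one per gene.
import math

def centroid(profiles):
    """Mean expression per gene across a list of profiles."""
    n = len(profiles)
    return [sum(p[i] for p in profiles) / n for i in range(len(profiles[0]))]

def distance(a, b):
    """Euclidean distance between two expression profiles."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(profile, sensitiser_profiles, control_profiles):
    """Assign the class whose centroid lies closer to the profile."""
    d_sens = distance(profile, centroid(sensitiser_profiles))
    d_ctrl = distance(profile, centroid(control_profiles))
    return "sensitiser" if d_sens < d_ctrl else "non-sensitiser"

# Toy data: 3-gene profiles instead of a full 200-transcript signature.
sensitisers = [[5.0, 1.0, 3.0], [5.5, 0.8, 3.2]]
controls = [[1.0, 4.0, 1.0], [1.2, 4.2, 0.9]]
print(classify([5.2, 1.1, 3.1], sensitisers, controls))  # sensitiser
```

In practice, published signature-based classifiers use far more sophisticated models, but the core idea of comparing an unknown profile against reference classes is the same.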
In addition, by categorising the chemicals according to the murine Local Lymph Node Assay (LLNA), this gene signature also shows potential for predicting sensitising potency. The identified markers are involved in biological pathways with immunologically relevant functions, which can shed light on the process of human sensitisation.
‘We were delighted with these results, but we were also left with a number of important questions,’ says Albrekt. ‘Throughout our studies, we were trying to connect different chemical properties to sensitising potency by looking at chemical reactivity, structure, molecule size, and so on. However, we really need more input here, from scientists and researchers with a good understanding of chemistry. This extra knowledge would make an important difference to our studies, and ultimately the results that we can produce.’
Based on these studies, researchers have now identified a highly accurate gene signature for predicting sensitisation, using a human cell line in vitro. This simple cell-based assay could entirely replace, or drastically reduce, the use of animal-based test systems, and is expected to be more accurate at predicting sensitisation in humans, as demonstrated in follow-up experiments.
‘Although gene expression studies are proving invaluable to the study of allergens, the amount of data produced by these experiments is enormous,’ Albrekt says. ‘It is impossible to derive any real biological meaning from these findings unless sophisticated data algorithms are used to help interpret these data effectively. However, some data analysis applications can be very complicated and difficult to use, even for specialist statisticians, so it is very important to find software that has been developed by scientists, for scientists.’ She is currently using sophisticated data analysis software called Qlucore Omics Explorer.
Most software in this area has focused on the ability to handle increasingly vast amounts of data, with the result that the role of the scientist has been largely set aside and much of the data analysis has been passed on to bioinformaticians and biostatisticians. This has several drawbacks, since it is typically the scientists themselves who know the most about the underlying biology.
This type of bioinformatics software now allows scientists to analyse very large data sets by combining statistical methods and visualisation techniques, such as heatmaps and Principal Component Analysis (PCA). Scientists studying allergens and other aspects of human biology can now easily analyse their data in real time, directly on their computer screen, with instant user feedback on all actions and an intuitive user interface that can present all data in 3D.
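The idea behind PCA-based visualisation can be sketched in a few lines: project high-dimensional expression profiles onto their direction of greatest variance, so that samples can be plotted and inspected by eye. The minimal power-iteration implementation below is purely illustrative, with invented toy data, and is not the algorithm used by any particular software package.

```python
# Illustrative PCA sketch: find the leading principal component by
# power iteration and project the samples onto it.

def mean_center(data):
    """Subtract the per-column (per-gene) mean from every row."""
    cols = list(zip(*data))
    means = [sum(c) / len(c) for c in cols]
    return [[x - m for x, m in zip(row, means)] for row in data]

def principal_component(data, iterations=200):
    """Leading principal component via power iteration on X^T X."""
    dim = len(data[0])
    v = [1.0] * dim
    for _ in range(iterations):
        # w = X^T (X v), then renormalise
        scores = [sum(x * vi for x, vi in zip(row, v)) for row in data]
        w = [sum(s * row[j] for s, row in zip(scores, data)) for j in range(dim)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

def project(data, component):
    """Score of each sample along one component."""
    return [sum(x * c for x, c in zip(row, component)) for row in data]

# Toy data: four "samples" with three "genes", forming two clear groups.
data = mean_center([[5.0, 1.0, 3.0], [5.5, 0.8, 3.2],
                    [1.0, 4.0, 1.0], [1.2, 4.2, 0.9]])
pc1 = principal_component(data)
scores = project(data, pc1)
# Samples from the same group land close together along PC1.
print(scores)
```

Real tools compute two or three components so the sample cloud can be rotated in 3D, but the dimensionality-reduction step rests on the same principle.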
The performance of data analysis software has greatly improved in the past couple of years. According to Albrekt, modern data analysis software can be used to transform high dimensional data down to lower dimensions, which can then be plotted in 3D on a computer screen and rotated manually or automatically, so they can be examined by eye.
‘When you are looking at such a large amount of genetic data, there is bound to be a number of confounding factors that distort the data,’ says Albrekt. ‘The ability to remove this “noise” is very important, in order for researchers to be sure that they are working with the most reliable data. Advanced data analysis software... makes it much easier to make a qualified judgment about the amount of noise present, so that researchers can see true patterns as they emerge.’
In fact, with key actions and plots now displayed within a fraction of a second, scientists can increasingly perform the research they want and find the results they need instantly – without the wait. This approach has helped to open up new ways of working with the analysis and, as a consequence, has helped to bring the biologists back into the analysis phase, which means that bioinformaticians and biostatisticians are free to focus on their own areas of interest and expertise.
Albrekt typically begins work by coding any interesting factors – and confounding factors – into a single file. She then imports these data and looks at the pattern of samples in order to search for both anticipated and non-anticipated sub-patterns. At this point, Albrekt can examine the sub-patterns using the coded factors that she had identified earlier, and then look for any significant differences by using statistical tests.
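A hypothetical sketch of that workflow might look as follows: each sample is annotated with coded factors, and a gene is then tested for significant differences between factor levels, here with Welch's t-statistic. The sample names, factor codes, and expression values are all invented for illustration.

```python
# Illustrative sketch: samples annotated with a coded factor ("group"),
# then tested for a difference in expression between the two levels.
import statistics

samples = {
    "chem_A": {"group": "sensitiser",     "expr": 5.1},
    "chem_B": {"group": "sensitiser",     "expr": 5.6},
    "chem_C": {"group": "sensitiser",     "expr": 4.9},
    "chem_D": {"group": "non-sensitiser", "expr": 1.1},
    "chem_E": {"group": "non-sensitiser", "expr": 0.9},
    "chem_F": {"group": "non-sensitiser", "expr": 1.3},
}

def welch_t(xs, ys):
    """Welch's t-statistic for two independent samples."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    vx, vy = statistics.variance(xs), statistics.variance(ys)
    return (mx - my) / ((vx / len(xs) + vy / len(ys)) ** 0.5)

sens = [s["expr"] for s in samples.values() if s["group"] == "sensitiser"]
ctrl = [s["expr"] for s in samples.values() if s["group"] == "non-sensitiser"]
print(round(welch_t(sens, ctrl), 2))  # 17.22 -- a large, clear separation
```

In a genome-wide analysis this test would be run per gene, with the resulting p-values corrected for multiple testing before any gene is called significant.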
‘We were able to test the robustness of our findings by using kNN visualisation, randomisation and permutation tools,’ she explains. ‘That way, we were able to make a decision on which variables to trust, annotate any significant variables that we had found, then export them for functional analysis using another software tool.’
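The permutation idea mentioned here can be illustrated with a minimal sketch: shuffle the group labels many times and count how often a random labelling produces a mean difference as large as the observed one. A small fraction suggests the observed difference is unlikely to be noise. The values below are invented, and this is not the team's actual tooling.

```python
# Illustrative permutation test on a single (toy) gene.
import random

def mean(xs):
    return sum(xs) / len(xs)

def permutation_p(a, b, n_perm=2000, seed=42):
    """Fraction of label shuffles with a mean difference at least
    as extreme as the observed one (an empirical p-value)."""
    rng = random.Random(seed)
    observed = abs(mean(a) - mean(b))
    pooled = a + b
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = abs(mean(pooled[:len(a)]) - mean(pooled[len(a):]))
        if diff >= observed:
            hits += 1
    return hits / n_perm

sensitisers = [5.1, 5.6, 4.9, 5.3]
controls = [1.1, 0.9, 1.3, 1.0]
print(permutation_p(sensitisers, controls))  # small: difference is robust
```

Because the null distribution is built from the data itself, this approach makes no assumption about how the expression values are distributed, which is one reason permutation tools are popular for judging robustness.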
In order to reduce the large number of identified significant genes, Albrekt and the team applied an algorithm developed in-house for 'Backward Elimination' of analytes. The selected biomarker profile of 200 transcripts was designated the 'Prediction Signature'. Additional analyses and the visualisation of results with Principal Component Analysis were performed in Qlucore Omics Explorer.
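The team's Backward Elimination algorithm is in-house and unpublished in detail; the sketch below only illustrates the general principle behind such procedures: repeatedly drop the variable whose removal costs the least according to some scoring function, until the desired number of variables remains. The scoring function and data here are toy inventions.

```python
# Generic backward-elimination sketch over a (toy) gene set.

def separation(genes, group_a, group_b):
    """Toy score: summed absolute mean difference over the kept genes."""
    return sum(abs(sum(s[g] for s in group_a) / len(group_a)
                   - sum(s[g] for s in group_b) / len(group_b))
               for g in genes)

def backward_eliminate(genes, group_a, group_b, keep):
    """Drop the least useful gene one at a time until `keep` remain."""
    genes = list(genes)
    while len(genes) > keep:
        drop = min(genes,
                   key=lambda g: separation(genes, group_a, group_b)
                   - separation([x for x in genes if x != g],
                                group_a, group_b))
        genes.remove(drop)
    return genes

# Toy samples: each sample maps gene name -> expression value.
group_a = [{"g1": 5.0, "g2": 1.0, "g3": 3.0},
           {"g1": 5.4, "g2": 1.2, "g3": 3.4}]
group_b = [{"g1": 1.0, "g2": 0.9, "g3": 1.1},
           {"g1": 1.2, "g2": 1.1, "g3": 0.9}]

print(backward_eliminate(["g1", "g2", "g3"], group_a, group_b, keep=2))
# ['g1', 'g3'] -- g2 barely separates the groups, so it is dropped first
```

In a real pipeline the score would be the cross-validated accuracy of a classifier rather than this additive toy measure, but the greedy shrink-and-retest loop is the same.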
With the speed and flexibility of this approach, Albrekt and the team were able to evaluate and test a number of different scenarios and hypotheses in minutes. This technique makes it possible for researchers to combine very large amounts of data, and conduct analyses in ways that were simply not possible before.
‘In our studies, we are dealing with very large amounts of data, sometimes between 10 and 100m data points, which we tend to view as graphics. Before, these graphics would typically take a very long time to appear, but with the latest data analysis tools, the information is presented instantly,’ Albrekt says. ‘As a result, we can be much more creative with our theories, as we can easily test any number of hypotheses in rapid succession.’
‘The work we’re doing in this area will not only contribute to the reduction in the number of animals required for safety testing, but also the establishment of more accurate tools for product development,’ she says.
Carl-Johan Ivarsson is president of bioinformatics firm Qlucore, based in Lund, Sweden.