In this new series, members of the SCI Mid-Career group offer advice on career management and how to overcome career challenges.
In our latest interview, we hear from David Freeman, Research & Technology Director for Croda’s Energy Technologies business.
Please tell us about yourself and your career journey.
After a PhD in organic chemistry, I started my career with ICI Paints in Slough in 1998, working in a product development role. Within a couple of years, I moved to another ICI business, Uniqema, and had various technical roles around the chemical synthesis or process development of new materials.
These early roles – and the people I worked with during this time – had a big impact on me in terms of ways of working and how to deal with people. I subsequently joined Croda in 2006 and have since had further technical roles – initially around the technical management of Synthesis programmes in Croda, then technical management of Applications programmes, and finally on to my current role of R&T Director for Croda’s Energy Technologies business.
This last transition was probably the most interesting and challenging as it forced me to think much more strategically about the “what” rather than the “how” and what leadership versus management was all about. I see this area as being hugely important to the Mid-Career group.
What are your keys to managing your career at this stage?
Development remains really important to me from a personal perspective. I have always driven my own development, but I have been well supported by the organisations I’ve worked for: both by technical management teams and HR teams. At the mid-career stage, there are lots of important things to think about, but I consider the following to be key:
What challenges are there around mid-career support?
I feel very fortunate to have worked for organisations where development is extremely important – support is always on hand when I need it. The key challenge is a personal one and it’s about making enough time to focus on the right development areas. We are all busy but if we want to develop ourselves enough, then we will find that time!
We are increasingly conscious of the need to recycle waste products, but it is never quite as easy as rinsing and sorting your waste into the appropriate bins, especially when it comes to plastic.
Despite our best intentions, only around 16% of plastic is recycled into new products. Worse, plastics tend to be recycled into low-quality materials, because transformation into high-value chemicals requires substantial amounts of energy; the realistic choices are downcycling or prohibitively expensive processing. The majority of single-use plastics end up in landfill or abandoned in the environment.
This is a particular problem when it comes to polyolefins such as polyethylene (PE) and polypropylene (PP), which use cheap and readily available raw materials. Approximately 380 million tonnes of plastics are generated annually around the world, and it is estimated that, by 2050, that figure will reach 1.1 billion tonnes. Polyolefins currently account for 57% of this total.
Why are polyolefins an issue? The strong sp3 carbon–carbon bonds in their backbones (essentially long, straight chains of carbon and hydrogen atoms) that make them useful as materials also make them particularly difficult to degrade and reuse without intensive, high-energy procedures or strong chemicals. More than for most plastics, downcycling or landfill disposal tend to be the main end-of-life options for polyolefins.
Polyethylene is used to make plastic bags and packaging.
Now, however, a team of scientists from MIT, led by Yuriy Román-Leshkov, believe they may have made a significant step towards solving this problem.
Previous research has demonstrated that metals such as zirconium, platinum, and ruthenium can help split apart short, simple hydrocarbon chains, as well as more complex, plant-derived lignin molecules, at much lower temperatures and energy inputs.
So the team looked at using the same approach for the long hydrocarbon chains in polyolefins, aiming to disintegrate the plastics into usable chemicals and natural gas. It worked.
First, they used ruthenium-carbon nanoparticles to convert more than 90% of the hydrocarbons into shorter compounds at 200°C (previously, temperatures of 430–760°C were required).
Next, they tested their new method on commercially available, more complex polyolefins without pre-treatment (normally an energy-intensive requirement). Not only were the samples completely broken down into gaseous and liquid products, but the end product could also be selected by tuning the reaction, yielding either natural gas or a combination of natural gas and liquid alkanes (both highly desirable).
Polypropylene is used in bottle caps, houseware, and other packaging and consumer products.
The researchers believe that industrial-scale use of their method could eventually help reduce the volume of post-consumer waste in landfills by recycling plastics into desirable, highly valuable alkanes — but, of course, it's not that simple. The team says more research is required into the effects of moisture and contaminants in the process, as well as product-removal strategies to decrease the formation of light alkanes; both will be critical for the industrialisation of this reaction.
However, they believe the path they're on could lead to affordable upcycling technology that would better integrate polyolefins into the global economy and incentivise the removal of waste plastics from landfill and the environment.
This relative of tobacco (Nicotiana tabacum) was first planted in the SCIence Garden in the summer of 2018. It was grown from seed by Peter Grimbly, SCI Horticulture Group member. Although normally grown as an annual, some of the SCIence Garden plants have proven to be perennial. It is also gently self-seeding across the garden. It is native to the south and southeast of Brazil and the northeast of Argentina, but both the species and many cultivars of it are now grown ornamentally across Europe. Flower colour is normally white, but variants with lime green and pink through to darker red flowers are available.
Like many Nicotiana species, this one has an attractive floral scent in the evening and through the night. The major component of the scent is 1,8-cineole. This constituent has been shown to be a chemical synapomorphy for the particular section of the genus Nicotiana that this species sits within (Raguso et al., Phytochemistry, 2006, 67, 1931–1942). A synapomorphy is a shared derived character – one that all descendants and the shared single ancestor will have.
This ornamentally and olfactorily attractive plant was chosen for the SCIence Garden to represent two other (arguably less attractive) Nicotiana species.
Firstly, Nicotiana benthamiana, a tobacco species from northern Western Australia. It is widely used as a model organism in research and also for the “pharming” of monoclonal antibodies and other recombinant proteins.
In a very topical example of this technology, the North American biopharmaceutical company Medicago is currently undertaking Phase 1 clinical trials of a Covid-19 vaccine produced using their plant-based transient expression and manufacturing technology.
Secondly, Nicotiana tabacum, the cultivated tobacco which contains nicotine. This alkaloid is a potent insecticide and tobacco was formerly widely used as a pesticide.
This vivid extract from William Dallimore’s memoirs of working at the Royal Botanic Gardens, Kew illustrates how tobacco was used in the late Victorian era.
“Real tobacco was used at Kew for fumigating plant houses. It was a very mixed lot that had been confiscated by excise officers, and it was said that it had been treated in some way to make it unfit for ordinary use before being issued to Kew. Ten men were employed on the job, working in the house. After the first hour the atmosphere became unpleasant, and after 1½ hours the first casualties occurred; some of the young gardeners had to leave the house. At the conclusion there were only the two labourers, the stoker and one young gardener to leave the house. I was still about, but very unhappy. Each man employed at the work, with the exception of the foreman, received one shilling extra on his week’s pay.”
After a second such fumigation event it was reported that there was a great reduction in insect pests, particularly of mealy bug and thrips, with a “good deal of mealy bug” falling to the ground dead.
Health and safety protocols have improved since the Victorian era, but nicotine remains effective as an insecticide. From the 1980s through the 1990s, a range of neo-nicotinoid plant protection agents were developed, with structures based on nicotine. Although extremely effective, these substances have also been shown to be harmful to beneficial insects, including honey bees. Concerns over these adverse effects have led to the withdrawal of approval for outdoor use in the EU.
Imidacloprid – the first neo-nicotinoid developed
In early 2020, the European Commission decided not to renew the European licence for the use of thiacloprid in plant protection, making it the fourth neo-nicotinoid excluded from use in Europe.
Where the next generation of pest control agents will come from is of vital importance to the horticulture and agriculture industries in the UK and beyond, and the presence of these plants in the garden serves to highlight this.
Dinosaurs were some of the largest creatures to ever roam the Earth, but the mystery of how they supported their great weight remains. A new study published in PLOS ONE now indicates that the answer may lie in their unique bone structure, which differs from mammals and birds.
Bone is made up of layers of differing consistency, including the spongy interior, or trabecular bone. This part of the bone is formed of porous, honeycomb-like structures.
A group of interdisciplinary researchers, including palaeontologists, mechanical engineers, and biomedical engineers, analysed trabecular bone structure in dinosaur samples spanning 23 kg to 8,000 kg in body mass. Their study found that dinosaur bones possessed unique structural properties allowing them to support large weights.
‘The structure of the trabecular, or spongy bone that forms in the interior of the bones we studied is unique within dinosaurs,’ said Tony Fiorillo, palaeontologist and one of the study authors. ‘Unlike in mammals and birds, the trabecular bone does not increase in thickness as the body size of dinosaurs increases; instead it increases the density of the spongy bone. Without this weight-saving adaptation, the skeletal structure needed to support the hadrosaurs would be so heavy that the dinosaurs would have had great difficulty moving.’
Their analysis included scanning the distal femur and proximal tibia from dinosaur fossils and modelling their likely mechanical behaviour. The research team also used allometric scaling, a method for understanding how physical characteristics change with body size. They then compared the architecture of the bones to scans of both living and extinct large animals, such as Asian elephants and mammoths.
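Allometric scaling can be sketched numerically: a power law of the form trait = a·mass^b becomes a straight line in log-log space, so the exponent b can be recovered by linear regression. The data below are purely illustrative assumptions, not the study's measurements.

```python
import numpy as np

def fit_allometric_exponent(mass_kg, trait):
    """Fit trait = a * mass^b by linear regression in log-log space.

    Returns (a, b): the prefactor and the scaling exponent.
    """
    b, log_a = np.polyfit(np.log(mass_kg), np.log(trait), 1)
    return np.exp(log_a), b

# Hypothetical data: a trait scaling as mass^(1/3), i.e. isometric
# length scaling, over the study's 23 kg - 8,000 kg body-mass range.
mass = np.array([23.0, 150.0, 900.0, 4000.0, 8000.0])
trait = 2.0 * mass ** (1.0 / 3.0)

a, b = fit_allometric_exponent(mass, trait)
print(round(a, 3), round(b, 3))  # → 2.0 0.333
```

A fitted exponent near 1/3 would indicate isometric scaling; the study's point is that dinosaur trabecular thickness departs from the scaling seen in mammals and birds.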
Researchers hope that they can apply their findings to design other lightweight structures such as those used in aerospace, construction, or vehicles.
‘Understanding the mechanics of the trabecular architecture of dinosaurs may help us better understand the design of other lightweight and dense structures,’ said Trevor Aguirre, mechanical engineer and lead author of the paper.
Soil is a very precious asset, whether in your garden or an allotment. Soil has physical and chemical properties that support its biological life, and, like any asset, understanding its properties is fundamental to its effective use and conservation.
Soils contain, depending on their origin, four constituents: sand, clay, silt and organic matter. Mineral soils, those derived from the weathering of rocks, contain varying proportions of all four, but their organic matter content will be less than 5 per cent. Above that figure, a soil is classed as organic; such soils derive from the deposition of decaying plants under very wet conditions, forming bogs.
Essentially, this anaerobic deposition produces peat, which, if drained, yields highly fertile soils such as the Fenlands of East Anglia. Peat’s disadvantage is oxidation: the organic matter steadily breaks down, releasing carbon dioxide, and is lost, revealing the subsoil, which is probably a layer of clay.
Cracked clay soil
Mineral soils with a high sand content are free draining, warm quickly in spring and are ‘light’ land, a term originating from the small number of horses required for their cultivation. Consequently, sandy soils encourage early spring growth and the first crops. Their disadvantage is limited water retention, so crops need regular watering in warm weather.
Clay soils are water retentive to the extent that they will become waterlogged during rainy periods. They are ‘heavy’ soils, meaning that large teams of horses were required for their cultivation. These soils produce main-season crops, especially deep-rooting ones such as maize. But in dry weather they crack open, rupturing root systems and reducing yields.
Silt soils contain very fine particles and may have originated, in geological time, by sedimentation in lakes and river systems. They can be highly fertile and are particularly useful for high-quality field vegetable and salad crops. Because of their preponderance of fine particles, silt soils ‘cap’ easily in dry weather: the sealed surface is not easily penetrated by germinating seedlings, causing erratic and patchy emergence.
Soil finger test
Soil composition can be determined by two very simple tests. A finger test will identify the relative content of sand, clay and silt: roll a small sample of moist soil between your thumb and fingers and feel the sharpness of sand particles, the relative slipperiness of clay, or the very fine, almost imperceptible particles of silt. For a floatation test, place a small soil sample into a jam jar filled with water. Over 24 to 48 hours the particles will settle, with the heavier sand forming the lower layer and clay and silt deposited on top. Organic matter will float on the surface of the water.
Soil floatation test
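The floatation test works because settling speed rises steeply with particle size: under Stokes' law, a small sphere's terminal velocity in water scales with the square of its diameter, so sand reaches the bottom of the jar long before clay does. A rough sketch (the particle diameters and densities below are typical textbook values, not measurements from any specific soil):

```python
def stokes_settling_velocity(diameter_m, particle_density=2650.0,
                             fluid_density=1000.0, viscosity=1.0e-3, g=9.81):
    """Terminal settling velocity (m/s) of a small sphere in water, by Stokes' law:
    v = 2 * (rho_p - rho_f) * g * r^2 / (9 * mu)."""
    r = diameter_m / 2.0
    return 2.0 * (particle_density - fluid_density) * g * r ** 2 / (9.0 * viscosity)

# Representative particle diameters: sand ~200 um, silt ~20 um, clay ~1 um
for name, d in [("sand", 200e-6), ("silt", 20e-6), ("clay", 1e-6)]:
    print(f"{name}: {stokes_settling_velocity(d):.2e} m/s")
```

On these assumed figures sand settles tens of thousands of times faster than clay, which is why distinct layers form over 24 to 48 hours.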
As the COVID-19 outbreak increases pressure on the UK’s NHS services and frontline staff, leading scientists and businesses are taking on new initiatives to tackle it. With no treatment or vaccine currently available for this virus, researchers are working at unprecedented speed to develop them, and businesses are stepping up their efforts to help those on the frontline of this global crisis.
INEOS has built a hand sanitiser plant in the UK and will soon open a facility in Germany, aiming to produce one million bottles per month at each site to address a supply shortage across the UK and Europe.
BASF will soon be producing hand sanitiser at its petrochemicals hub in Germany to address the shortage in the region.
Ramping up the supply of PPE, AstraZeneca is donating nine million face masks to support healthcare workers around the world. Alongside this, AstraZeneca is accelerating the development of its diagnostic testing capabilities to scale-up screening and is also partnering with governments on existing screening programmes.
Pharmaceutical company Novartis UK, along with several others, is making available a set of compounds from its library that it considers are suitable for in vitro antiviral testing.
GSK has announced that it is donating $10 million to the COVID-19 Solidarity Response Fund, created by the World Health Organisation (WHO) to help WHO and its partners prevent, detect and manage the pandemic.
Alongside these efforts and initiatives from industry, social distancing interventions must remain in place to flatten the curve and continue to aid those on the frontline of this global crisis.
Research and data modelling have shown that policy strategies, such as social distancing and isolation interventions aimed at suppressing the rate of transmission, might reduce deaths and peak healthcare demand by two-thirds.
Stopping non-essential contact can flatten the curve. Flattening the curve means we may still see the same number of people becoming infected, but over a longer period of time and at a slower rate, reducing the stress on our healthcare system.
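The logic of flattening the curve can be sketched with a minimal SIR epidemic model. The parameter values below are illustrative assumptions, not fitted to COVID-19 data or to the modelling studies cited above; the point is only that halving the transmission rate sharply lowers the epidemic peak.

```python
def sir_peak_infected(beta, gamma=0.1, days=365, dt=0.1, i0=1e-4):
    """Peak infected fraction from a simple SIR model, integrated by Euler steps.

    beta:  transmission rate per day (lowered by social distancing)
    gamma: recovery rate per day
    i0:    initially infected fraction of the population
    """
    s, i, r = 1.0 - i0, i0, 0.0
    peak = i
    for _ in range(int(days / dt)):
        new_inf = beta * s * i * dt   # S -> I transitions this step
        new_rec = gamma * i * dt      # I -> R transitions this step
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
        peak = max(peak, i)
    return peak

print(sir_peak_infected(0.30))  # unmitigated transmission: high peak
print(sir_peak_infected(0.15))  # with distancing: much lower, later peak
```

Roughly the same total number of infections can occur in both cases, but the lower peak is what keeps demand within healthcare capacity.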
March in the SCIence Garden
Narcissus was the classical Greek name of a beautiful youth who became so entranced with his own reflection that he killed himself and all that was left was a flower – a Narcissus. The word is possibly derived from an ancient Iranian language. But the floral narcissi are not so self-obsessed. As a member of the Amaryllidaceae, a family known for containing biologically active alkaloids, it is no surprise to learn that they contain a potent medicinal agent.
Narcissus (and this cultivar in particular) is an excellent source of galanthamine, a drug more commonly associated with snowdrops (Galanthus spp.). Galanthamine is currently recommended for the treatment of moderate Alzheimer’s disease by the National Institute for Health and Care Excellence (NICE), but is very effective in earlier stages of the disease too.
Today, the commercial supply of this molecule comes partly from chemical synthesis, itself an amazing achievement given the structural complexity of the molecule, and partly from the natural product isolated from different sources across the globe. In China, Lycoris radiata is grown as a crop; in Bulgaria, Leucojum aestivum is farmed; and in the UK the humble daffodil, Narcissus ‘Carlton’, is the provider.
Narcissus ‘Carlton’ growing on large scale
Agroceutical Products was established in 2012 to commercialise the research of Trevor Walker and colleagues, who developed a cost-effective, reliable and scalable method for producing galanthamine by extraction from Narcissus. They discovered the “Black Mountains Effect”: increased production of galanthamine when the narcissi are grown under stress conditions at 1,200 feet. With support from Innovate UK and other organisations, the process is still being developed; whilst not yet a full-scale commercial production process, the work is ongoing. As well as providing a supply of a much-needed drug, the company may be showing the Welsh farming community how to secure additional income from their land. They continue to look for partners with suitable land over 1,000 ft in elevation.
The estimated global patient population for Alzheimer’s disease in 2010 was 30 million, and it is expected to reach 120 million by 2050. The global market for Alzheimer’s disease drugs in 2019 was US$2,870 million.
2019 has been declared by UNESCO as the Year of the Periodic Table. To celebrate, we are releasing a series of blogs about our favourite elements and their importance to the chemical industry.
Discovery of this noble gas:
Argon was discovered in 1894 by the chemists Sir William Ramsay and Lord Rayleigh. Ramsay believed that a heavy impurity in ‘atmospheric’ nitrogen could be responsible for its higher density compared with nitrogen isolated chemically. Both scientists worked to track down this unrecognised new element hiding in the air, and in 1904 each received a Nobel Prize, primarily for his role in the discovery of argon.
Argon makes up about 1% of the Earth’s atmosphere and is the most plentiful of the rare gases. It can be used in both its gaseous and its liquid state; as a liquid, argon can be stored and transported more easily, affording a cost-effective way to deliver supply.
Argon as a narcotic agent
One of the best-known biological effects of argon gas is its narcotic capability. Divers normally develop narcotic symptoms under high pressure when breathing ordinary respiratory air; these symptoms include slowed mental cognition and psychological instability. Argon exerts this narcotic effect physically rather than chemically, since, as an inert gas, it does not undergo chemical reactions in the body.
Argon also provides several benefits during the heating and cooling of metal printing materials. The gas reduces oxidation of the metal, preventing unwanted reactions and keeping out impurities, and creates a stable printing environment in which a constant pressure is maintained.
Future of argon
Argon’s potential clinical utility has received considerable attention. Although the benefits are still at the experimental stage, argon could be an ideal neuroprotective agent: studies have shown that it could improve cell survival, brain structural integrity and neurological recovery. These protective effects remain effective even when argon is delivered up to 72 hours after brain injury.
2019 has been declared by UNESCO as the Year of the Periodic Table. To celebrate, we are releasing a series of blogs about our favourite elements and their importance to the chemical industry. Today’s blog focuses on titanium and its various uses in industries.
What is titanium?
Titanium is a silver-coloured transition metal exhibiting low density, high strength and strong resistance to corrosion from water and chlorine. Accordingly, titanium has many uses across industries, with approximately 6.6 million tonnes produced annually.
Titanium dioxide is the most common use of titanium, accounting for approximately 90% of consumption. It is a white powder with high opacity, and its properties make it suitable for a broad range of applications in paints, plastic goods, inks and papers. Titanium dioxide is manufactured through the chloride process or the sulphate process; the sulphate process is the more popular, making up 70% of production within the EU.
Titanium’s characteristics (lightweight, strong and versatile) make it a valuable metal in the aerospace industry. For aircraft to be safely airborne, the industry needs parts which are light and strong, and at the same time safe; titanium is an ideal match for these specifications.
Titanium implants have been used with success, and titanium has become a promising material in dentistry. As a result of its features, including physiological inertness, resistance to corrosion and biocompatibility, titanium plays an important role in the dental market.
However, the technologies and systems used in the machining, casting and welding of titanium are slow and expensive. Although these technologies are widely available, a more profitable and efficient process for manufacturing titanium dental prostheses will depend on further technological advances and the availability of resources.
2019 has been declared by UNESCO as the Year of the Periodic Table. To celebrate, we are releasing a series of blogs about our favourite elements and their importance to the chemical industry. Today’s blog focuses on sodium and its role in the next series of innovative nuclear energy systems.
Sodium, the sixth most abundant element on the planet, is being considered as a crucial component of the next generation of nuclear reactors. Implementing new levels of safety is crucial as governments look for environmentally friendly, low-risk and financially viable reactors, and ensuring those safety levels is a major challenge being tackled by many industries and projects.
In the wake of Fukushima, several European nations and a number of US plants have switched off their ageing reactors in order to eliminate safety hazards.
The sodium-cooled fast reactor (SFR), a concept pioneered in the US in the 1950s, is designed to operate at higher temperatures than today’s reactors and appears to be a viable model. The SFR’s main advantage is that it can burn unwanted byproducts, including uranium, reducing the need for storage. In the long run, this is deemed cost-competitive, as it can produce power without using new natural uranium.
Nuclear reactor. Source: Hallowhalls
However, using sodium also presents challenges: it burns on contact with air and reacts explosively with water. To keep sodium away from water, nitrogen-driven turbines are being designed as a solution to this problem.
ESFR-SMART (European Sodium Fast Reactor Safety Measures Assessment and Research Tools), a European Horizon 2020 project launched in September 2017, aims to improve the safety of Generation-IV sodium fast reactors (SFRs). The project hopes to prove the safety of the new reactors and secure their future role in Europe. The new reactor is designed to reprocess its own waste, operate more reliably, and be more environmentally friendly and more affordable. It is hoped that it will be considered as one of the SFR options by the Generation IV International Forum (GIF), which is focused on new reactors with safety, reliability and sustainability among its main priorities.
European Horizon. Source: artjazz
Globally, the SFR is deemed an attractive energy source, and developments are ongoing, endeavouring to meet the future energy demands in a cost-competitive way.
2019 has been declared by UNESCO as the Year of the Periodic Table. To celebrate, we are releasing a series of blogs about our favourite elements and their importance to the chemical industry. Today’s blog focuses on lead and its place in the battery industry.
2019 is a critical year for the European battery industry. As policymakers set priorities to decarbonise energy systems whilst boosting Europe’s economic and technical performance, lead-acid batteries have become a viable player in the battery industry.
Increased government action and ongoing transformations to address the environmental situation have furthered global interest in the lead battery market, as these batteries remain important in the fight against the adverse effects of climate change. Reliance on fuel technologies is lessening as the lead battery industry grows; it held a market share of 31% in 2018, with an annual growth rate of 5.4%.
According to Reports and Data, the global lead-acid battery market is predicted to reach USD 95.32 billion by 2026. Rising demand for electric vehicles, and significant increases in battery use in sectors including the automotive, healthcare and power industries, are a large part of the push behind this growth.
Thus, expansion of these sectors, and particularly the automotive sector, means further development in this market is underway, especially as lead-acid is the only battery technology to meet the technical requirements for energy storage at large market scale.
A lead-acid battery is a rechargeable cell comprising plates of lead and lead dioxide immersed in a sulfuric acid electrolyte, converting chemical energy into electrical energy. On discharge, the lead is oxidised at the negative plate while the lead dioxide is reduced at the positive plate, both forming lead sulfate; this pair of reactions drives the current through the external circuit.
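The discharge chemistry can be written out explicitly. These are the standard textbook half-reactions for a lead-acid cell, not figures specific to any product mentioned here:

```latex
% Negative plate (oxidation):
\mathrm{Pb + SO_4^{2-} \rightarrow PbSO_4 + 2e^-}
% Positive plate (reduction):
\mathrm{PbO_2 + SO_4^{2-} + 4H^+ + 2e^- \rightarrow PbSO_4 + 2H_2O}
% Overall cell reaction (discharge; charging runs it in reverse):
\mathrm{Pb + PbO_2 + 2H_2SO_4 \rightarrow 2PbSO_4 + 2H_2O}
```

Charging reverses the overall reaction; the sulfation problem arises when the lead sulfate formed on discharge crystallises into a form that resists this reversal.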
In the past, lead has fallen behind competing technologies such as lithium-ion batteries, which have captured approximately 90% of the battery market. Although lithium-ion is a strong competitor, lead still has advantages: lead batteries do not carry the same fire risks as lithium-ion batteries, and lead is the most efficiently recycled commodity metal, with over 99% of lead batteries collected and recycled in Europe and the US.
Researchers are trying to better understand how to improve lead battery performance. A build-up of sulfation can limit a lead battery to half its potential; fixing this issue would unlock that unused capacity and offer even lower-cost recyclable batteries. Once the chemical interactions inside the batteries are better understood, one can begin to consider how to extend battery life.
Scottish chemist and past SCI President, Sir William Ramsay (1852–1916) came from a long line of scientists on both sides of his family and was described as ‘the greatest chemical discoverer of his time’.
Born in Glasgow, he showed a strong interest in science from a young age and, in his teenage years, he experimented with making fireworks, using materials acquired by his father.
He completed his doctorate in organic chemistry and later, in 1887, was appointed as the Chair of Chemistry at University College London, where he made his most renowned discoveries.
Working with British physicist John William Strutt (better known as Lord Rayleigh), the two men discovered an unknown gas. Owing to its apparent lack of chemical activity, they named the gas argon, meaning “the lazy one”.
After the co-identification of argon, Sir William Ramsay suggested that it be placed into the periodic table between chlorine and potassium in a group with helium. Due to the zero valency of the elements this was named the “zero” group.
From 1895, Ramsay spent three years trying to prove the theory of this new group of gases, leading to the isolation of helium, neon, krypton and xenon. Eventually, a new column was added to the periodic table.
Ramsay was an outstanding experimentalist. He rolled his own cigarettes, claiming that machine-made ones were unworthy of an experimentalist such as himself.
In 1904, he was awarded the Nobel Prize in Chemistry “for his discovery of the inert gaseous elements in air, and his determination of their place in the Periodic system”. As a result, Ramsay became a considerable celebrity in London and was cartooned both by Spy for Vanity Fair and by Henry Tonks, Head of UCL’s Slade School of Art.
Ramsay ascribed his success in isolating the rare gases to his large flat thumb, which could close the end of eudiometer tubes (graduated glass tubes used to measure gas volumes) full of mercury.
The group of elements that he discovered is now commonly known as the noble gases and comprises helium, neon, argon, krypton, xenon, and radon. Generally, they are chemically inert (they do not react with other elements) because they have a full complement of s and p electrons in their outermost energy level. However, only helium and neon are truly inert; under very specific conditions, the other noble gases will react on a limited scale.
Today, the noble gases are in wide use in the real world.
Argon is particularly important for the metal industry because it does not react with metals at high temperatures. It is used in arc welding (a process that joins metal to metal using electricity to generate enough heat to melt the metal) and in light bulbs, where it prevents oxygen from corroding the hot filament.
Helium, one of the most common and lightest elements in the universe, is used for diluting the pure oxygen in deep-sea diving tanks. It is also used to inflate the tyres of large aircraft, weather balloons, blimps and party balloons.
Neon, which means ‘new one’ in Greek, is commonly used in colourful glass-tube neon signs: it glows bright red when an electric current is passed through the gas and it enters a plasma state. Other uses of neon include vacuum tubes, television tubes, and helium-neon lasers.
Krypton and xenon, valued for their inertness, are used in photographic flash units, lightbulbs and lighthouses, as these elements generate a bright light when an electric current runs through them.
The original glass tubes that Ramsay used to isolate and collect his samples at UCL still exist today; they continue to glow red, yellow, purple and green more than a century later.
Not only did Ramsay’s successes complete gaps in the periodic table, but he also paved the way for a deeper understanding of how the elements are connected, shaping our understanding today, a huge achievement that can be attributed in no small part to his experimental nature and his large flat thumb!
The ability to control when and how vigorously plants flower was a major discovery in horticultural science. Its use has spawned vast industries worldwide supplying flowers and potted plants out of season. The control mechanism was uncovered by two American physiologists in the 1920s. Temperate plants inhabit zones where seasonal daylength varies, with lengthening light periods in spring and shortening ones in autumn.
These environmental changes divide plants into those which flower in long days and those which flower in short days; ‘photoperiodism’ was coined as the term describing these events. Extensive subsequent research demonstrated that it is the period of darkness which is crucially important. Short-day plants flower when darkness exceeds a critical minimum, usually about 12 hours, which is typical of autumn. Long-day plants flower when the dark period is shorter than that critical minimum.
Irises are long day flowers. Image: Geoffery R Dixon
A third group of plants, usually from tropical zones, are day-neutral: flowering is unaffected by day length. Long-day plants include clover, hollyhock, iris, lettuce, spinach and radish. Gardeners will be familiar with the way lettuce and radish “bolt” in early summer. Short-day plants include chrysanthemum, goldenrod, poinsettia, soybean and many annual weed species. Day-neutral types include peas, runner and green beans, sweet corn (maize) and sunflower.
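The dark-period rule described above can be sketched as a simple decision function. This is a toy illustration only: the 12-hour critical night length is the typical figure quoted above, and real thresholds vary by species and cultivar.

```python
def will_flower(plant_type: str, dark_hours: float, critical: float = 12.0) -> bool:
    """Toy model of photoperiodism: flowering is governed by the length
    of the uninterrupted dark period, not the light period."""
    if plant_type == "short-day":
        # e.g. chrysanthemum: flowers once nights exceed the critical minimum
        return dark_hours > critical
    if plant_type == "long-day":
        # e.g. iris: flowers while nights are shorter than the critical minimum
        return dark_hours < critical
    if plant_type == "day-neutral":
        # e.g. sunflower: flowering unaffected by day length
        return True
    raise ValueError(f"unknown plant type: {plant_type}")

# Autumn-like conditions: 14 hours of darkness
print(will_flower("short-day", 14))   # True
print(will_flower("long-day", 14))    # False
```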
Immense research efforts identified a plant pigment, phytochrome, as the trigger molecule. It exists in two states, active and inactive, which are interconverted on receiving red or far-red wavelengths of light.
Sunflowers are day neutral flowers. Image: Geoffery R Dixon
In short-day plants, for example, the active form suppresses flowering but decays into the inactive form with increasing periods of darkness, while a brief flash of light restores the active form and stops flowering. That knowledge underpins businesses supplying cut-flower chrysanthemums and potted plants, and supplies of poinsettias for Christmas markets. Identifying the precise demands of individual cultivars of these crops means that growers can schedule production volumes geared very precisely to peak markets.
Providing the appropriate photoperiods requires very substantial capital investment. Consequently, there has been a century-long quest for the ‘Holy Grail of Flowering’, a molecule which when sprayed onto crops initiates the flowering process.
Chrysanthemums are short day flowers. Image: Geoffery R Dixon
In 2006 the hormone florigen was finally identified and characterised. Biochemists and molecular biologists are now working intensively on pathways by which it can be used effectively to provide more efficient flower production in a wider range of species.
2019 has been declared by UNESCO as the Year of the Periodic Table. To celebrate, we are releasing a series of blogs about our favourite elements and their importance to the chemical industry. Today’s blog focuses on cobalt and its current and potential uses.
In 1739, Georg Brandt, while studying minerals that gave glass a deep blue colour, discovered a new metal: cobalt. Today cobalt’s uses range from health and nutrition to industry. It is used in the production of alloys, rechargeable batteries and catalysts. Cobalt is also an essential trace element for the human body: it is an important component of vitamin B12 and plays an essential role in forming amino acids and proteins in nerve cells, and in creating neurotransmitters.
Cobalt is an important component of B12. Image source: flickr: Healthnutrition
Cobalt and medicine
Cobalt salts can be used as a treatment for anaemia, and have also been used by athletes as an alternative to traditional blood doping. The metal enhances the synthesis of erythropoietin, increasing the erythrocyte quantity in blood and, subsequently, improving aerobic performance.
Cobalt can enter the body in various ways; one route is through the skin. This organ is susceptible to environmental pollution, especially in workers employed in heavy industry.
When cobalt ions from metal objects repeatedly come into contact with the skin, they can diffuse through it, causing allergic and irritant reactions.
Important raw material for electric transport
Cobalt is also a critical raw material for electric transport. It is used in the production of the most common types of lithium-ion batteries, powering the current boom in electric vehicles.
The global electric vehicle fleet has the potential to grow from 3.2 million vehicles in 2017 to around 130 million in 2030, with demand for cobalt expected to increase almost threefold within the next decade.
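As a rough sanity check on those projections (a back-of-envelope sketch using only the figures quoted above; the implied growth rate is my own calculation):

```python
# Back-of-envelope check on the electric vehicle growth projection.
fleet_2017 = 3.2e6   # vehicles on the road in 2017
fleet_2030 = 130e6   # projected vehicles in 2030
years = 2030 - 2017

growth_factor = fleet_2030 / fleet_2017   # ~41x overall
cagr = growth_factor ** (1 / years) - 1   # ~33% compound annual growth

print(f"fleet grows ~{growth_factor:.0f}x, i.e. ~{cagr:.0%} per year")
```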
As the EU continues to develop its battery industry, securing adequate cobalt supplies is becoming a priority for manufacturers. The electric vehicle boom means demand for cobalt will increase in the EU as well as globally; further projects to monitor the supply-and-demand situation will be announced.
2019 has been declared by UNESCO as the Year of the Periodic Table. To celebrate, we are releasing a series of blogs about our favourite elements and their importance to the chemical industry. Today’s blog focuses on iron and its importance for human health.
Iron’s biological role
Iron is an important component of haemoglobin, a protein in red blood cells that transports oxygen throughout the body. If the level of iron in your body is low, it cannot produce enough healthy oxygen-carrying red blood cells, and a lack of these cells can result in iron-deficiency anaemia.
Iron had early medicinal uses among the Egyptians, Greeks, Hindus and Romans, and around 1932 it became established that iron is essential for haemoglobin synthesis.
Red blood cells
The World Health Organisation (WHO) has released figures suggesting that iron deficiency is incredibly common in humans and is a primary cause of anaemia.
According to its statistics, there are around 1.62 bn cases of anaemia, many caused by iron deficiency; according to WHO’s 2008 report, anaemia can result from excessive blood loss, poor iron absorption and low dietary intake of iron.
Iron bioavailability in food is low among populations consuming plant-based diets. Meeting the iron requirement is important, and where iron deficiency is prevalent among populations in developing countries, behavioural and health consequences follow.
These include reduced fertility rates, fatigue, decreased productivity and impaired school performance among children.
During pregnancy, iron utilisation increases, as it is essential nourishment for the developing fetus. A 1997 study found that pregnant women needed this increase in iron: 51% of pregnant women suffered from anaemia, twice the rate among non-pregnant women.
As iron is a redox-active transition metal, it can form free radicals, and in excessive amounts this is dangerous, as it can cause oxidative stress that may lead to tissue damage. Epidemiological studies provide evidence that excessive iron can be a potent risk factor for chronic conditions such as cardiovascular disease and metabolic abnormalities.
Dietary iron is found in two basic forms: from animal sources (as haem iron) or from plant sources (as non-haem iron). The most bioavailable form is from animal sources; iron from plant sources is predominantly found in cereals, vegetables, pulses, beans, nuts and fruit.
However, the absorption of this form of iron is affected by various factors: phytate and calcium can bind iron in the intestine, unfortunately reducing absorption, while vitamin C, present in fruit and vegetables, can aid the absorption of non-haem iron, as can eating it with meat.
‘The global burden of iron deficiency anaemia hasn’t changed in the past 20 years, particularly in children and women of reproductive age,’ says researcher Dora Pereira. Although iron is an important nutrient for keeping healthy, it is imperative that iron levels are not too high.
The banana colour scheme distinguishes seven stages from ‘All green’ to ‘All yellow with brown flecks’. The green, unripe banana peel contains leucocyanidin, a flavonoid that induces cell proliferation, accelerating the healing of skin wounds. But once it is yellowish and ready to eat, the chlorophyll breaks down, leaving the recognisable yellow colour of carotenoids.
Unripe (green) and ready-to-eat (yellow) bananas.
The fruits are cut from the plant while green and, on average, 10-30% of bananas do not meet quality standards at harvest. They are then packaged and kept at cold temperatures to slow enzymatic processes such as respiration and ethylene production.
However, below 14°C bananas experience ‘chilling injury’, which alters fruit-ripening physiology and can lead to brown speckles on the skin. Above 24°C, bananas also fail to develop a fully yellow colour, as they retain high levels of chlorophyll.
Once the green bananas arrive at the ripening facility, the fruits are kept in ripening rooms where the temperature and humidity are kept constant while the amount of oxygen, carbon dioxide and ethene are controlled.
The gas itself triggers the ripening process, leading to the breakdown of cell walls and the conversion of starches to sugars. Certain fruits stored near bananas can ripen more quickly because of the bananas’ ethene production.
By day five, bananas should be in stage 2½ (’Green with trace of yellow’ to ‘More green than yellow’) according to the colour scale and are shipped to the shops. From stage 5 (’All yellow with green tip’), the fruits are ready to be eaten and have a three-day shelf-life.
A fruit market. Image: Gidon Pico
The very short shelf-life of the fruit makes for a wasteful system. By day five, the sugar content and pH value are ideal for yeasts and moulds. Bananas not only start turning brown and mouldy, but they also undergo a 1.5-4% ‘weight loss’ as water is lost from the peel.
While scientists have been trying different chemical and natural lipid ‘dips’ to extend bananas’ shelf-life, extending it remains one of the industry’s greatest challenges.
In fruit salads, to stop banana slices going brown, the cut fruits are sprayed with a mixture of citric acid and amino acid that keeps them yellow and firm without affecting the taste.
Bananas are a good source of potassium and vitamins.
Thanks to its high starch concentration – over 70% of dry weight – banana processing into flour and starch is now also attracting the attention of industry. Bananas have a great many pharmaceutical properties as well, such as high dopamine levels in the peel and high amounts of beta-carotene, a precursor of vitamin A.
Whilst the ‘seven shades of yellow’ underpin the marketability of bananas, the plants are also now threatened by the fungal Panama disease. This vascular wilt disease led to the collapse of the banana industry in the 1950s, which was overcome by a new variety of banana.
However, the hard-to-control disease has evolved to infect Cavendish bananas and has been spreading rapidly from Australia and China to India, the Middle East and Africa.
The future of the banana industry relies on strict quarantine procedures to limit further spread of the disease to Latin America, integrated crop management and continuous development of banana ‘dips’ for extending shelf-life.
2019 has been declared by UNESCO as the Year of the Periodic Table. To celebrate, we are releasing a series of blogs about our favourite elements and their importance to the chemical industry. Today’s blog focuses on silicon’s positive effects on the body.
Silicon was not originally regarded as an important element for human health, as it was seen to have a larger presence in other animal and plant tissues. It was not until a 2002 paper in The American Journal of Clinical Nutrition summarised the accumulating research that silicon was recognised as playing an important role in bone formation in humans.
Silicon was first thought to ‘wash’ through biology with no toxicological or biological effect. However, in the 1970s, animal studies provided evidence that silicon-deficient diets produced defects in connective and skeletal tissues. Ongoing research has added to these findings, demonstrating the link between dietary silicon and bone health.
Silicon plays an important role in protecting humans against many diseases. Silicon is an important trace mineral essential for strengthening joints. Additionally, silicon is thought to help heal and repair fractures.
The most important source of exposure to silicon is your diet. According to two epidemiological studies (Int J Endocrinol. 2013: 316783; J Nutr Health Aging. 2007 Mar-Apr; 11(2): 99–110), dietary silicon intake has been linked to higher bone mineral density.
Silicon is needed to repair tissue, as it is important for the synthesis of collagen – the most abundant protein in the body’s connective tissue – which is needed for the strengthening of bones.
However, silicon is very common in the body, so it is difficult to prove how essential it is to this process, and symptoms of deficiency vary among patients.
There has also been a plausible link between Alzheimer’s disease and human exposure to aluminium. Research has been underway to test whether silicon-rich mineral waters can be used to reduce the body burden of aluminium in individuals with Alzheimer’s disease.
However, longer term study is needed to prove the aluminium hypothesis of Alzheimer’s disease.
2019 has been declared by UNESCO as the Year of the Periodic Table. To celebrate, we are releasing a series of blogs about our favourite elements and their importance to the chemical industry. Today’s blog is about the various uses of nitrogen.
Nitrogen – an imperative part of DNA
The polymer that makes up the genetic code is a sequence of nitrogenous bases laid out on a backbone of sugar and phosphate molecules and wound into a double helix.
The sequence of nitrogenous bases is translated into proteins and enzymes, which regulate most of our body’s biochemical reactions.
RDX is a nitrogen explosive, meaning its explosive properties are primarily due to the presence of many nitrogen–nitrogen bonds. These are extremely unstable: nitrogen atoms strongly favour coming together to form nitrogen gas, whose triple bond is exceptionally stable.
Ultimately, the more nitrogen–nitrogen bonds a molecule has, the more explosive it is. RDX is normally combined with other chemicals to make it less sensitive or less likely to explode.
One of the most powerful explosive chemicals is PETN, which, like the nitroglycerin in dynamite, contains nitro groups. Despite its powerful explosions, the chemical rarely detonates on its own. PETN was used frequently during World War II to create exploding-bridgewire detonators, which use electric currents for detonation.
Among the least stable explosives is azidoazide azide, with 14 nitrogen atoms, most of them joined in unstable nitrogen–nitrogen bonds. Merely touching or handling this chemical can cause it to detonate, making it one of the most dangerous non-nuclear chemicals.
Nitrogen and plants
Nitrogen plays a significant role in keeping plants healthy. Plants usually contain 3-4% nitrogen in their above-ground tissues. Nitrogen is a major component of chlorophyll, which plants use to capture sunlight energy to produce sugars, and of amino acids, the building blocks of life.
Overall, nitrogen is a significant component of DNA, a key nutrient for plants, and its everyday uses span various chemical industries, including the production of fertilisers and explosives.
Today, most rockets are fuelled by hydrazine, a toxic and hazardous chemical composed of nitrogen and hydrogen. Those who work with it must be kitted out in protective clothing. Even so, around 12,000 t of hydrazine is released into the atmosphere every year by the aerospace industry.
Now, researchers are developing a greener, safer rocket fuel based on metal-organic frameworks (MOFs), porous solid materials made up of clusters of metal ions joined by organic linker molecules, with hundreds of millions of connections joined in a modular structure.
Robin Rogers, formerly at McGill University, Canada, has worked with the US Air Force on hypergolic liquids – ones that burn on contact with oxidisers – in an effort to replace hydrazine. He teamed up with Tomislav Friščić at McGill, who has developed ways to react chemicals ‘mechanochemically’, without the use of toxic solvents.
The pair were interested in a common class of MOFs called zeolitic imidazole frameworks, or ZIFs, which show high thermal stability and are usually not thought of as energetic materials.
They explored the potential of ZIFs with imidazolate linkers containing trigger groups, which allowed them to access the normally inaccessible energetic content of these MOFs.
The resulting ZIF is safe and does not explode; it does not ignite unless placed in contact with certain oxidising materials, such as nitric acid.
Authorities continue to use hydrazine because it could cost millions of dollars to requalify new rocket fuels, says Rogers. MOF fuel would not work in current rocket engines, so he and Friščić would like to get funding or collaborate with another company to build a small prototype engine that can use it.
2019 has been declared by UNESCO as the Year of the Periodic Table. To celebrate, we are releasing a series of blogs about our favourite elements and their importance to the chemical industry. Today’s blog is about the importance of potassium in human health.
Potassium plays an essential role in health, being the third most abundant mineral in the body. The human body requires at least 1000 mg of potassium a day to support key bodily processes.
Potassium regulates fluid balance in the body, controls the electrical activity of the heart and muscles, and helps activate nerve impulses throughout the nervous system.
According to an article from the Medical News Today Knowledge Center, the possible health benefits of a regular dietary intake of potassium include maintaining the balance of acids and bases in the body, supporting blood pressure, improving cardiovascular health, and helping with bone and muscle strength.
These powerful health benefits are linked to a potassium-rich diet. Potassium is present in most fruits, vegetables, meat and fish.
Receptors on a cell membrane.
Can it go wrong?
The body maintains the potassium level in the blood. If the potassium level is too high (hyperkalemia) or too low (hypokalemia), there can be serious health consequences, including an abnormal heart rhythm or even cardiac arrest.
Fortunately, cells in the body store a large reservoir of potassium which can be released to maintain a constant level of potassium in blood.
What is hyperkalemia? Video: Osmosis
Potassium deficiency leads to fatigue, weakness and constipation. Within muscle cells, potassium normally helps relay the signals from the brain that stimulate contractions. If potassium levels dip too low, these signals cannot be relayed properly to the muscles, resulting in prolonged contractions such as muscle cramps.
As potassium is an essential mineral with wide-ranging roles in the body, low intakes can lead to increased illness. The FDA has approved a health claim stating that ‘diets containing foods that are a good source of potassium and that are low in sodium may reduce the risk of high blood pressure and stroke.’
This suggests that consuming more potassium might reduce the risk of high blood pressure and stroke. However, more research on dietary and supplemental potassium is required before firm conclusions can be drawn.
2019 has been declared by UNESCO as the Year of the Periodic Table. To celebrate, we are releasing a series of blogs about our favourite elements and their importance to the chemical industry. Today, we investigate the uses of platinum.
Archaeologists have discovered traces of platinum in the gold of ancient Egyptian burials dating to around 1200BC.
However, the extent of the Egyptians’ knowledge of the metal remains unknown; they may well have been unaware that the platinum was present in the gold.
The Ancient Egyptians made elaborate masks for royals to wear once they were mummified.
Platinum was also used in South America as far back as 2000 years ago. Burial goods show that on the Pacific coast of South America, people were able to work platinum, producing artifacts of a white gold-platinum alloy.
Archaeologists link the South American tradition of platinum-working with the La Tolita Culture. Archaeological sites show the highly artistic nature of this culture, with the artifacts characterised by gold and platinum jewellery, and anthropomorphic masks symbolising the hierarchical and ritualistic society.
What are its properties?
Platinum is a silvery-white metal, sometimes referred to as ‘white gold’. It is extremely resistant to tarnishing and corrosion and is one of the least reactive metals, unaffected by water and air, which means it will not oxidise in air.
It is also very soft and malleable, so it can be shaped easily, and its ductility means it can be stretched into wire.
Platinum is a member of group 10 of the periodic table. The group 10 metals have several uses, including decorative purposes, electrical components and catalysts in a variety of chemical reactions, and they play an important role in biochemistry; platinum compounds in particular have been widely used as anticancer drugs.
Additionally, platinum’s resistance to tarnish makes it one of the best-suited elements for making jewellery.
Platinum compounds are often used as medicines in cancer treatment. However, the health effects of platinum depend on the kinds of bonds formed, the level of exposure, and the immunity of the individual.
In 1844, Michele Peyrone, an Italian chemist, first prepared the platinum compound later known as cisplatin; its anti-neoplastic properties (inhibiting the development of tumours) were recognised much later, and in 1971 the first human cancer patient was treated with a platinum-containing drug.
Today, approximately 50% of cancer patients undergoing chemotherapy are treated with medicines that include the rare metal. Scientists are looking further into the ways platinum drugs affect biology, and how to design better platinum drugs in the future.
In an era of glass and steel construction, wood may seem old-school. But researchers now say it’s time to give timber a makeover and put to use a material that can store and release heat.
Transparent wood could be the construction material of choice for eco-friendly houses of the future, after researchers created an even more energy-efficient version that not only transmits light but also absorbs and releases heat, potentially saving on energy bills.
Researchers from KTH Royal Institute of Technology in Stockholm reported in 2019 that they had added the polymer polyethylene glycol (PEG) to the formulation to stabilise the wood.
PEG can penetrate deep into the wood cells, where it stores and releases heat. Known as a phase-change material, PEG is a solid that melts at 80°F, storing energy in the process. The process reverses at night, when the PEG re-solidifies, turning the window glass opaque and releasing heat to maintain a constant temperature in the house.
Transparent wood for windows and green architecture. Video: Wise Wanderer
In principle, a whole house could be made from the wooden window glass thanks to the properties of PEG. The windows could be adapted for different climates simply by tailoring the molecular weight of the PEG to raise or lower its melting temperature, depending on the location.
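To put rough numbers on the idea (a sketch, not figures from the KTH study: the ~150 J/g latent heat is an assumed order-of-magnitude value for PEG, and the 500 g panel mass is hypothetical):

```python
# Order-of-magnitude estimate of the heat a PEG phase-change layer could store.
melt_f = 80.0                    # melting point quoted above, in degrees Fahrenheit
melt_c = (melt_f - 32) * 5 / 9   # about 26.7 degrees Celsius

latent_heat = 150.0   # J/g, assumed typical latent heat of fusion for PEG
peg_mass = 500.0      # g of PEG in one window panel (hypothetical)

stored_joules = peg_mass * latent_heat  # energy absorbed on melting
print(f"PEG melts at ~{melt_c:.1f} C; a {peg_mass:.0f} g panel stores ~{stored_joules/1000:.0f} kJ")
```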
Clean, fresh water is essential for human life, and water is a necessity for agriculture and other industries. However, global population growth and pollution from industrial waste have put a strain on local fresh water resources.
A hydrogel is made up of polymer chains that are hydrophilic (attracted to water) and are known for being highly absorbent.
Current clean-up costs can be extremely high, leaving poorer and more remote populations at risk of exposure to metal pollutants such as lead, mercury, cadmium and copper, which can severely affect the neurological, reproductive and immune systems.
Now, a group of scientists at the University of Texas at Dallas, US, has developed a 3D-printable hydrogel capable of removing 95% of metal contaminants within 30 minutes.
Clean water is also needed for one’s hygiene, including brushing your teeth and bathing.
The hydrogel is made from chitosan, a cheap and abundant biopolymer, and diacrylated Pluronic, which together form cDAP. The cDAP mixture is loaded into the printer as a liquid and cooled to below 4°C, before being warmed back to room temperature to form a gel that can be printed into various 3D shapes.
The Dallas team also tested the reusability of their hydrogel and found it had a recovery rate of 98% after five cycles of use, suggesting it could be a reliable resource for communities with limited fresh water supplies.
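That 98% figure implies a remarkably small loss per cycle. Assuming the loss compounds geometrically across cycles (my assumption, for illustration):

```python
# If capacity decays geometrically, per-cycle retention r satisfies r**5 = 0.98.
overall_recovery = 0.98
cycles = 5

per_cycle = overall_recovery ** (1 / cycles)  # retention each cycle, ~0.996
loss_per_cycle = 1 - per_cycle                # ~0.4% lost per cycle

print(f"~{loss_per_cycle:.2%} capacity lost per cycle")
```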
Life without clean water. Video: charitywater
‘This novel and cost-effective approach to remove health and environmental hazards could be useful for fabricating cheap and safe water filtration devices on site in polluted areas without the need for industrial scale manufacturing tools,’ the paper reads.
2019 has been declared by UNESCO as the Year of the Periodic Table. To celebrate, we are releasing a series of blogs about our favourite elements and their importance to the chemical industry. Today we look at copper and some of its popular uses.
A brief history
Copper was one of the first metals ever extracted and used by humans, with use dating back to around 5000BC. According to the US Geological Survey, copper ranks as the third most consumed industrial metal in the world.
Around 5500BC, our early ancestors discovered the malleable properties of copper and found it could be fashioned into tools and weapons – a discovery that allowed humans to emerge from the Stone Age into the age of metals.
Volcanic rocks in Tenerife, Spain.
Approximately two-thirds of the Earth’s copper is found in volcanic rocks, while approximately one-quarter occurs in sedimentary rocks.
The metal is malleable and an excellent conductor of heat and electricity, making copper an extremely useful industrial metal that is used to make electronics, cables and wiring.
What is it used for?
Humans have manufactured items from copper since 4500BC. Copper is mostly used as a pure metal, but its strength and hardness can be adjusted by adding tin to create the copper alloy known as bronze.
In the 1700s, pennies were made from pure copper; in the 1800s they were made from bronze; and today, pennies consist of approximately 97.5% zinc and 2.5% copper.
Copper is utilised for a variety of industrial purposes. In addition to copper’s good thermal and electric conductivity, copper now plays an important role in renewable energy systems.
As copper is an excellent conductor of heat and electricity, power systems use copper to generate and transmit energy with high efficiency and minimal environmental impacts.
E. Coli cultures on a Petri dish.
Copper also plays an important role as an antibacterial material: copper alloy surfaces destroy a wide range of microorganisms.
Recent studies have shown that copper alloy surfaces kill over 99.9% of E. coli microbes within two hours. In the interest of public health, especially in healthcare environments, studies overseen by the Environmental Protection Agency (EPA) have led to 274 different copper alloys being listed as certified antimicrobial materials, making copper the first solid surface material to be registered by the EPA.
Copper has always maintained an important role in modern society with a vast list of extensive uses. With further development of renewable energy systems and electric vehicles, we will likely see an ongoing increase in demand for copper.
Throughout the series, you will be introduced to its members through regular features that highlight their roles and major interests in energy. We welcome you to read the series and hope to spark some interesting conversations across all areas of SCI.
The burning of fossil fuels is the biggest contributor to global greenhouse gas emissions.
According to the National Oceanic and Atmospheric Administration (NOAA), by the end of 2018 its observatory at Mauna Loa, Hawaii, had recorded the fourth-highest annual growth in atmospheric CO2 seen in the last 60 years.
Adding to the concern, the Met Office has confirmed that this trend is likely to continue and that the annual rise in 2019 could be larger than in the previous two years.
Forecast global CO2 concentration against previous years. Source: Met Office and contains public sector information licensed under the Open Government Licence v1.0.
Large concentrations of CO2 in the atmosphere are a major concern because it is a greenhouse gas. Greenhouse gases absorb infrared radiation emitted from the Earth's surface, so less of the Sun's energy escapes back into space. Because the influx of radiation is greater than the outflux, the globe warms as a consequence.
Although CO2 emissions can occur naturally through biological processes, the biggest contributor to said emissions is human activities, such as fossil fuel burning and cement production.
Increase of CO2 emissions before and after the Industrial Era. Source: IPCC, AR5 Synthesis Report: Climate Change 2014, Fig. 1.05-01, Page. 3
Weather impacts from climate change include drought and flooding, as well as a noticeable increase in natural disasters.
This warming has resulted in changes to our climate system which has created severe weather impacts that increase human vulnerability. One example of this is the European heat wave and drought which struck in 2003.
The event resulted in an estimated death toll of over 30,000 lives and is recognised as one of the top 10 deadliest natural disasters across Europe within the last century.
In 2015, in an attempt to address this issue, 195 nations from across the globe united to adopt the Paris Agreement, which seeks to keep the global temperature rise to well below 2°C, with efforts to limit it further to 1.5°C.
The Paris Climate Change Agreement explained. Video: The Daily Conversation
In their latest special report, the Intergovernmental Panel on Climate Change (IPCC) explained that this would require significant changes in energy, land, infrastructure and industrial systems, all within a rapid timeframe.
In addition, the recently published Emissions Gap Report urges that global emissions must peak by 2020 if we are to succeed in meeting this ambitious target.
Are we further away than we think?
As well as the Paris Agreement, the UK is committed to the Climate Change Act (2008) which seeks to reduce greenhouse gas emissions by at least 80% by 2050 relative to 1990 baseline levels. Since 1990, the UK has cut emissions by over 40%, while the economy has grown by 72%.
To ensure that the 2050 target is met, the government has implemented carbon budgets, which cap the legal emissions of greenhouse gases within the UK across five-year periods. Currently, these budgets run to 2032 and the UK is in the third budget period (2018-2022).
The UK has committed to end the sale of all new petrol and diesel cars by 2040.
At present, the UK is on track to outperform both the second and third budgets. However, it is not on track to achieve the fourth budget target (2023-2027). To meet it, the Committee on Climate Change (CCC) urges that UK emissions be reduced by at least 3% annually from this point forward.
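A quick compound-reduction check, using only the figures above, shows why the CCC's 3% rate is roughly the pace required (a sketch; real carbon accounting is considerably more involved):

```python
# Start from ~60% of 1990 emissions (a cut of just over 40% already achieved),
# then apply a 3% reduction each year out to 2050.
level = 0.60
for year in range(2019, 2050):
    level *= 0.97   # 3% annual reduction

# The 2050 target is at least an 80% cut, i.e. at most 20% of 1990 levels.
print(f"by 2050, emissions fall to ~{level:.0%} of 1990 levels")
```

At exactly 3% a year this lands at roughly 23% of 1990 levels, slightly short of the 20% target, which is why the recommendation is phrased as "at least" 3%.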
We may not be sure which technologies will allow such great emission reductions, but one thing is for certain – decarbonisation is essential, and it must happen now!
2019 has been declared by UNESCO as the Year of the Periodic Table. To celebrate, we are releasing a series of blogs about our favourite elements and their importance to the chemical industry. Today we look at arsenic and some of its effects.
What is arsenic?
Arsenic is a chemical element found in nature – low levels of arsenic are present in water, air and soil – as well as in man-made products. Because arsenic is distributed throughout the environment, people can be exposed to elevated levels of inorganic arsenic through contaminated drinking water, as well as through seafood, other foods and insecticides.
Is arsenic harmful?
Arsenic can occur in organic and inorganic forms. Organic arsenic compounds are less harmful to our health, whereas inorganic arsenic compounds (e.g. those found in water) are highly toxic carcinogens. Arsenic contamination of groundwater has led to arsenic poisoning, which affects the skin, liver, lungs and kidneys.
Arsenic contamination has attracted particular attention in Bangladesh, where 21.4% of all deaths in a highly affected area were attributed to arsenic levels surpassing the WHO's provisional guideline value of 10 μg/L.
Long-term exposure to low doses of arsenic can interfere with the way cells communicate, which may reduce their ability to function, playing a role in the development of disease and increasing health risks.
For example, cells use phosphate to communicate with other cells, but arsenate, one form of arsenic, can replace and imitate phosphate in the cell. This damages cells so they cannot generate energy and impairs their ability to communicate.
The health risks of arsenic in drinking water. Video: EnviroHealthBerkeley
Symptoms of arsenic poisoning can be acute, severe or chronic depending on the period of exposure and method of exposure. Symptoms may include vomiting, abdominal pain and diarrhoea, and long-term exposure can lead to cancers of the bladder and lungs.
Certain industries – including glass production, smelting, wood treatment and pesticide use – may expose workers to arsenic, but the maximum permitted exposure is 10 micrograms per cubic metre of air over an 8-hour shift. Traces of arsenic can also be found in tobacco, posing a risk to people who smoke cigarettes and other tobacco products.
A global threat
Arsenic is naturally found in the Earth’s crust and can easily contaminate water and food.
WHO has ranked arsenic among the top 10 chemicals posing a major threat to public health. WHO is working to reduce arsenic exposure; however, assessing the health dangers of arsenic is not straightforward.
As the symptoms and signs caused by long-term exposure to inorganic arsenic vary across population groups and geographical regions, as well as between individuals, there is no universal definition of the disease caused by this element. However, continuous efforts and measures are being made to keep concentrations as low as possible.
Scientists are closer to developing 3D printed artificial tissues that could help heal bones and cartilage, specifically those damaged in sports-related injuries. Scaffolds for the tissues have been successfully engineered.
Small injuries to osteochondral tissue – the hard bone that sits beneath a smooth layer of cartilage – can be extremely painful and heal slowly. These injuries are very common in athletes and can stop their careers in their tracks. Osteochondral injuries can also lead to arthritis over time.
These types of injuries are commonly seen in athletes.
Osteochondral tissue sits somewhere between bone and cartilage; it is quite porous and very difficult to reproduce. But now, bioengineering researchers at Rice University, Texas, US, have used 3D printing techniques to develop a material that may be suitable for future medical use.
A porous scaffold was engineered with custom polymer mixes for the cartilage and ceramic for the bone. The embedded pores allow cells and blood vessels from the patient to infiltrate, integrating the scaffold into the natural bone and cartilage.
‘For the most part, the composition will be the same from patient to patient,’ said Sean Bittner, graduate student at Rice University and lead author of the study.
The aerogel could be used to coat spacecraft due to its resilience to extreme conditions.
The aerogel comprises a network of tiny air pockets, with each pocket separated by two atomically thin layers of hexagonal boron nitride. It’s at least 99% space. To build the aerogel, Duan’s team used a graphene template coated with borazine, which forms crystalline boron nitride when heated. When the graphene template oxidises, this leaves a ‘double-pane’ boron nitride structure.
The basis of the newly developed aerogel is the 2D structure of graphene.
‘The key to the durability of our new ceramic aerogel is its unique architecture,’ says study co-author Xiangfeng Duan of the University of California, US.
‘The “double-pane” ceramic barrier makes it difficult for heat to transfer from one air bubble to another, or to spread through the material by traveling along the hexagonal boron nitride layers themselves, because that would require following long, circuitous routes.’
How does Aerogel technology work? Video: Outdoor Research
Unlike other ceramic aerogels, the material doesn’t become brittle under extreme conditions. The new aerogel withstood 500 cycles of rapid heating and cooling from -198°C to 900°C, as well as 1400°C for one week. A piece of the insulator shielded a flower held over a 500°C flame.
Improvements in robots and robotic technologies have fuelled huge advancements across many industries in recent years. The UK Industrial Strategy has several Sector Deals in which robotic innovations play a role, particularly in Artificial Intelligence (AI), Life Sciences and Nuclear.
Innovative robotics have a place in all industries to improve efficiency and processes; however, in industries where radioactive materials are commonly used, robots can also help to manage risk – for example, by limiting employees' exposure to radioactive substances or preventing potential accidents.
In the UK, legislation limits how much exposure to ionising radiation employees may receive each year – an adult employee is classified, and therefore must be monitored, if they receive an effective dose greater than 6 mSv per year. The average adult in the UK receives 2.7 mSv of radiation per year.
Snake-like robot is used to dismantle nuclear facilities. Video: Tech Insider
By using robots, very few professionals in the chemical industry come close to this limit, keeping them safe from long-term health effects such as skin burns, radiation sickness and cancer.
2019 has been declared by UNESCO as the Year of the Periodic Table. To celebrate, we are releasing a series of blogs about our favourite elements and their importance to the chemical industry. Today we look at mercury and some of its reactions.
Mercury is a silvery, heavy, liquid metal. Though mercury is liquid at room temperature, as a solid it is very soft. Mercury has a variety of uses, mainly in thermometers or as an alloy in tooth fillings.
Mercury & Aluminium
Mercury is added directly to aluminium after the oxide layer is removed. Source: NileRed
The reaction between mercury and aluminium forms an amalgam (an alloy of mercury). The aluminium's oxide layer is disrupted when the amalgam forms, in the following reaction:
Al + Hg → Al(Hg)
Some of the aluminium dissolves in the mercury. The aluminium in the amalgam then reacts with the air to form white aluminium oxide fibres, which grow out of the solid metal.
Mercury & Bromine
Mercury and bromine are the only two elements that are liquid at room temperature on the periodic table. Source: Gooferking Science
When mercury and bromine are added together they form mercury(I) bromide in the following reaction:
2Hg + Br2 → Hg2Br2
This reaction is unusual in that mercury can form a metal-metal covalent bond, giving mercury(I) bromide the structure Br-Hg-Hg-Br.
Mercury & Thiocyanate
Making the Pharaoh's Serpent by igniting mercury(II) thiocyanate. Source: NileRed
The first step of this reaction is to generate water-soluble mercury(II) nitrate by combining mercury and concentrated nitric acid. The reaction goes as follows:
Hg + 4HNO3 → Hg(NO3)2 + 2H2O + 2NO2
Next, the mixture is boiled to remove excess NO2 and convert the mercury(I) nitrate by-product to mercury(II) nitrate. The mixture is then washed with water, and potassium thiocyanate is added to the mercury(II) nitrate:
Hg(NO3)2 + 2KSCN→ Hg(SCN)2 + 2KNO3
The mercury (II) thiocyanate appears as a white solid. After this is dried, it can be ignited to produce the Pharaoh’s serpent, as it is converted to mercury sulfide in the following reaction:
2Hg(SCN)2 → 2HgS + CS2 + C3N4
The result is the formation of a snake-like structure. Many of the final products of this process are highly toxic, so although this used to be used as a form of firework, it is no longer commercially available.
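A quick way to check equations like this is to count atoms on each side. This small script, with the compositions entered by hand, verifies the balanced decomposition 2Hg(SCN)2 → 2HgS + CS2 + C3N4:

```python
from collections import Counter

def atom_total(side):
    """Sum atom counts over (coefficient, composition) pairs on one side."""
    totals = Counter()
    for coeff, composition in side:
        for element, count in composition.items():
            totals[element] += coeff * count
    return totals

# 2 Hg(SCN)2 -> 2 HgS + CS2 + C3N4
reactants = [(2, {"Hg": 1, "S": 2, "C": 2, "N": 2})]
products = [(2, {"Hg": 1, "S": 1}),   # 2 HgS
            (1, {"C": 1, "S": 2}),    # CS2
            (1, {"C": 3, "N": 4})]    # C3N4
balanced = atom_total(reactants) == atom_total(products)   # True
```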
Though many reactions of mercury look like a lot of fun, mercury and many of its products are highly toxic - so don't try these at home!
Cooking, cleaning and other routine household tasks generate significant quantities of volatile and particulate chemicals inside the average home, leading to indoor air quality on a par with a polluted major city, said a researcher from the University of Colorado Boulder, US.
Not only that, but these chemicals – from products such as shampoo, perfume and cleaning solutions – also find their way into the external environment, making up an even greater source of global atmospheric pollution than vehicles.
‘Homes have never been considered an important source of outdoor pollution and the moment is right to start exploring that,’ said Marina Vance, assistant professor of mechanical engineering at CU Boulder. ‘We wanted to know how do basic activities such as cooking and cleaning change the chemistry of a house?’
First Conclusions from the HOMEChem Experiment. Video: Home Performance
In 2018, Vance co-led the collaborative HOMEChem field campaign, which used advanced sensors and cameras to monitor the indoor air quality of a 112m2 manufactured home on the University of Texas Austin campus.
Over one month, Vance and her collaborators from a number of other US universities conducted a variety of activities – from cooking toast to preparing a full Thanksgiving dinner for 12 guests in the middle of summer – as well as cleaning and similar tasks.
2019 has been declared by UNESCO as the Year of the Periodic Table. To celebrate, we are releasing a series of blogs about our favourite elements and their importance to the chemical industry. Today we look at helium.
Helium was first discovered by French astronomer Jules Janssen in 1868 when observing the spectral lines of the Sun during a solar eclipse. He initially thought the unidentified line was sodium, later concluding it was an element in the sun unknown to Earth.
In March 1895, Sir William Ramsay, a Scottish chemist, isolated helium on Earth for the first time by treating a mineral called cleveite with mineral acids. He was initially looking for argon, but noticed that his spectral lines matched those observed by Jules Janssen.
Helium was discovered when Jules Janssen was observing the solar eclipse spectra.
Helium is a colourless, non-toxic and inert gas. It is the second lightest and second most abundant element in the universe.
Helium is often used for cryogenic (cooling) purposes. Liquid helium has a temperature of approximately -269°C (4.2 K), only a few degrees above absolute zero. It is used for cooling superconducting magnets.
Helium is used to cool superconducting magnets used in MRI. Image: Pixabay
Superconducting magnets have applications in imaging, such as nuclear magnetic resonance (NMR), used for analysing molecules, and magnetic resonance imaging (MRI), a medical imaging technique. These techniques are important for scientific research and medical diagnostics.
Helium can also be used as a pressurising gas for welding and growing silicon wafer crystals, or as a lifting gas for balloons and airships.
Helium is also used in airships and balloons. Image: Pixabay
A commonly known use of helium is to fill balloons often found at parties and events. When people breathe in the helium gas from these balloons, their voice changes.
As helium is much less dense than nitrogen and oxygen, the two main gases that make up regular air, sound travels nearly three times as fast through it. When you speak after inhaling helium, the timbre of your voice is affected by this change, making it appear higher in pitch.
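The pitch shift can be estimated from tube resonance: the resonant frequencies of the vocal tract scale linearly with the speed of sound, f = v/(4L) for a tube closed at one end. The tract length and sound speeds below are typical textbook values, used purely for illustration:

```python
def first_resonance(speed_of_sound, tract_length=0.17):
    """First resonant frequency of a tube closed at one end: f = v / (4L)."""
    return speed_of_sound / (4 * tract_length)

V_AIR, V_HELIUM = 343.0, 972.0   # m/s, approximate room-temperature values

# The resonance shifts upward by the ratio of sound speeds, roughly 2.8x.
shift = first_resonance(V_HELIUM) / first_resonance(V_AIR)
```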
Why is helium so important? Video: SciShow
Unfortunately, helium is a non-renewable resource, and reserves are running out. There is currently no cheap way to create helium, so industries need to be vigilant when using it, and we may see fewer helium balloons in the future.
Scientists from the Department of Energy’s Lawrence Berkeley National Laboratory, California, US, have designed a method in which semiconducting materials have been turned into quantum machines.
This work could revolutionise the field, and lead to new efficient electronic systems and exciting physics.
Quantum machines are generally made from two-dimensional (2D) materials – often graphene. These materials are one atom thick and can be stacked. When the materials form a repeating pattern, this can generate unique properties.
Studies with graphene have resulted in large advancements in the field of 2D materials. A new study has found a way to use two semiconducting materials – tungsten disulphide and tungsten diselenide – to develop a material with highly interacting electrons.
The researchers determined that the ‘twist angle’ – the angle between the two layers – provides the key to turning a 2D system into a quantum material.
Dr Gary Harris talks about radio technology to quantum materials. Source: TEDx Talks
‘This is an amazing discovery because we didn’t think of these semiconducting materials as strongly interacting,’ said Feng Wang, Professor of Physics at UC Berkeley. ‘Now this work has brought these seemingly ordinary semiconductors into the quantum materials space.’
2019 has been declared by UNESCO as the Year of the Periodic Table. To celebrate, we are releasing a series of blogs about our favourite elements and their importance to the chemical industry. Today’s blog is an element which gives us life, oxygen.
Oxygen is a group 16 gas found abundantly in nature. Of the air we breathe, 20.8% is oxygen in its elemental, diatomic form, O2. Oxygen is also one of the most abundant elements in nature and, along with carbon, hydrogen and nitrogen, makes up the structures of most of the natural world. Oxygen can be found in DNA, sugars, hormones, proteins and many more natural structures.
Although oxygen mainly exists as a colourless gas, at -183°C it condenses to a pale blue liquid. Oxygen may seem unassuming, but it is highly reactive and strongly oxidising. A common example of this reactivity is how oxygen reacts with iron to produce iron oxide, seen as rust.
Oxygen molecules are paramagnetic – they exhibit magnetic characteristics when in the presence of a magnetic field. Liquid oxygen is so magnetic that the effect can be seen by suspending it between the poles of a powerful magnet.
Oxygen gas has applications for medicine and space travel in breathing apparatus.
Oxygen can be found as ozone or O3. Ozone is a pale blue gas and has a distinctive smell. It is not as stable as diatomic oxygen (dioxygen) and is formed when ultraviolet light (UV) and electrical charges interact with O2.
The highest concentration of ozone can be found in the Earth’s stratosphere, which absorbs the Sun’s UV radiation, providing natural protection for planet Earth.
Ozone (O3) is most concentrated in the stratosphere. Image: Pixabay
Ozone can be used industrially as a powerful oxidising agent. Unfortunately, it can be a dangerous respiratory hazard and pollutant, so it must be used with care.
Water consists of an oxygen atom and two hydrogen atoms. Though this may seem remarkably unassuming, the combination gives water unique properties that are crucial to its functions in the natural world.
Water can form hydrogen bonds between the slightly positive hydrogen and the slightly negative oxygen. These hydrogen bonds, along with water's other practical properties, make water useful in nature.
Without the hydrogen bonding found in water, plants could not transport water through their xylem against gravity. The surface tension of water also provides stability for many natural structures.
Oxygen plays a key role in nature, including in water molecules. Image: Pixabay
Oxygen plays a key role in nature, from the ozone layer that encapsulates our planet to our DNA. Its combination with hydrogen in water makes a molecule integral to the natural world, and both water and oxygen itself are pivotal to our existence on the planet.
For British Science Week 2019, we are looking back at how Great Britain has shaped different scientific fields through its research and innovation. British scientists, engineers and inventors have played a significant role in developing engines and the automotive industry that stemmed from them.
Before the internal combustion engine, steam power was revolutionary in progressing industry in Britain.
The first practical steam engine was designed by English inventor Thomas Newcomen in 1712 and was later adapted by Scotsman James Watt in 1765. Watt’s steam engine was the first to make use of steam at an above atmospheric pressure.
The Steam Engine - How Does It Work? Video: Real Engineering
In 1804, the first locomotive-hauled railway journey was made by a steam locomotive designed by Richard Trevithick, an inventor and mining engineer from Cornwall, UK.
After this, steam trains took off and the steam engine was used in many ways such as powering the SS Great Britain, designed by Isambard Kingdom Brunel and launched in 1843.
The SS Great Britain in Bristol, UK, today.
Engines at the ready
The conception and refinement of the internal combustion engine involved many inventors from around the world, including British ones.
The automobile, using the internal combustion engine, was invented in Germany, and Britain picked up on the emerging industry very quickly. British brands remain among the most famous and abundant cars on the road today; Aston Martin, Mini, Jaguar, Land Rover and Rolls-Royce may come to mind.
By the 1950s, the UK was the second-largest manufacturer of cars in the world (after the United States) and the largest exporter.
In 1930, the jet engine was patented by Sir Frank Whittle. He was an aviation engineer and pilot who started his career as an apprentice in the Royal Air Force (RAF). The jet engine became critical after the outbreak of World War II.
Great Britain is still a major player in the aviation industry, and engineering innovation continues to be a major part of the British economy. British inventors have gone on to create the hovercraft, hundreds of different jet designs and a variety of military vehicles.
2019 has been declared by UNESCO as the Year of the Periodic Table. To celebrate, we are releasing a series of blogs about our favourite elements and their importance to the chemical industry. Today, on International Women's Day, we look at the two elements radium and polonium and the part Marie Curie played in their discovery.
Who was Marie Curie?
Marie Sklodowska and her future husband Pierre Curie.
Marie Sklodowska-Curie was born in 1867 in Poland. As a young woman she had a strong preference for science and mathematics, so in 1891 she moved to Paris, France, and began her studies in physics, chemistry and mathematics at the University of Paris.
After gaining a degree in physics, Curie began working on her second degree whilst working in an industrial laboratory. As her scientific career progressed, she met her future husband, Pierre Curie, whilst looking for larger laboratory space. The two bonded over their love of science, and went on to marry, have two children and discover two elements together.
After finishing her thesis on ‘Studies in radioactivity’, Curie became the first woman to win a Nobel Prize, the first and only woman to win twice, and the only person to win in two different sciences.
Curie, along with husband Pierre and collaborator Henri Becquerel, won the 1903 Nobel prize in Physics for their radioactivity studies, and the 1911 Nobel prize in Chemistry for the isolation and study of elements radium and polonium.
Curie won the Nobel prize twice in two different subjects. Image: Pixabay
As of 2018, Curie is one of only three women to have won the Nobel Prize in Physics and one of the five women to be awarded the Nobel Prize in Chemistry.
Polonium, like radium, is a rare and highly reactive metal with 33 isotopes, all of which are unstable. Polonium was named after Marie Curie’s home country of Poland and was discovered by Marie and Pierre Curie from uranium ore in 1898.
Polonium is not only radioactive but is highly toxic. It was the first element discovered by the Curies when they were investigating radioactivity. There are very few applications of polonium due to its toxicity, other than for educational or experimental purposes.
Radium is an alkaline earth metal, discovered in the form of radium chloride by Marie and her husband Pierre in December 1898. They also extracted it from uraninite (uranium ore), as they did with polonium. Later, in 1911, Marie Curie and André-Louis Debierne isolated the metal radium by electrolysing radium chloride.
The discovery of radium led to the development of modern cancer treatments, like radiotherapy.
Pure radium is a silvery-white metal with 33 known isotopes, all of which are radioactive – some more so than others. The common historical unit for radioactivity, the curie, is based on the radioactivity of radium-226.
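The definition of the curie can be reproduced from radium-226's half-life using A = λN, where λ = ln 2 / t½. The figures below (half-life of about 1600 years, molar mass 226 g/mol) are standard reference values:

```python
import math

AVOGADRO = 6.02214e23

def activity_bq(mass_g, molar_mass, half_life_s):
    """Activity A = lambda * N of a pure radionuclide sample, in becquerels."""
    n_atoms = mass_g / molar_mass * AVOGADRO
    decay_constant = math.log(2) / half_life_s
    return decay_constant * n_atoms

# 1 g of radium-226 gives ~3.7e10 decays per second: the original 1 curie.
one_curie = activity_bq(1.0, 226.0, 1600 * 365.25 * 24 * 3600)
```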
Famously, radium was historically used as self-luminescent paint on clock hands. Unfortunately, many of the workers that were responsible for handling the radium became ill – radium is treated by the body as calcium, where it is deposited in bones and causes damage because of its radioactivity. Safety laws were later introduced, followed by discontinuation of the use of radium paint in the 1960s.
Marie Curie: A life of sacrifice and achievement. Source: Biographics
Curie’s work was exceptional not only in its contributions to science, but in how women in science were perceived. She was an incredibly intelligent and hard-working woman who should be celebrated to this day.
Our SCI journal, Polymer International, is celebrating its 50th anniversary in 2019. Volume 1, Issue 1 was first published in January 1969 under the original name British Polymer Journal. The journal, published by Wiley, continues to publish high-quality, peer-reviewed research demonstrating innovation in the polymer field.
Today, we look at the five highest-cited Polymer International papers and their significance.
Article: A review of biodegradable polymers: uses, current developments in the synthesis and characterization of biodegradable polyesters, blends of biodegradable polymers and recent advances in biodegradation studies – Wendy Amass, Allan Amass and Brian Tighe. 47:2 (1998)
In the last few years, much of environmentalists’ focus has been on our plastic waste issue, particularly the issue of plastic build up in the oceans, and searching for alternatives. This review, published in 1998, was ahead of its time, describing biodegradable polymers and how they could help to solve our growing plastics problem. Research in this area continues to this day.
Here’s how much plastic trash Is littering the Earth. Video: National Geographic
The life of RAFT
Article: Living free radical polymerization with reversible addition – fragmentation chain transfer (the life of RAFT) – Graeme Moad, John Chiefari, (Bill) Y K Chong, Julia Krstina, Roshan T A Mayadunne, Almar Postma, Ezio Rizzardo and San H Thang. 49:9 (2000)
This research article by Moad et al., published in 2000, looks to answer questions about free radical polymerization with reversible addition-fragmentation chain transfer (RAFT polymerization). RAFT polymerization is a type of polymerization that can be used to design polymers with complex architectures including comb-like, star, brush polymers and cross-linked networks. These complex polymers have application in smart materials and biological applications.
Article: Main properties and current applications of some polysaccharides as biomaterials – Marguerite Rinaudo. 57:3 (2008)
Biomaterials made from sugar polymers have huge potential in the field of regenerative medicine
The review by Marguerite Rinaudo looks at polysaccharides – polymers made from sugars – and evaluates their potential in biomedical and pharmaceutical applications. It concluded that alginates, along with a few other named examples, were promising. Alginate-based biomaterials have since been used in the field of regenerative medicine, including wound healing, bone regeneration and drug delivery, and have potential applications in tissue regeneration.
Article: Supramolecular polymer chemistry—scope and perspectives – Jean-Marie Lehn. 51:10 (2002)
This 2002 paper reviews advances in supramolecular polymers – uniquely complex structured polymers. They have a wide range of complex applications. Molecular self-assembly – the ability of these polymers to assemble into the correct structure without input – can be used to develop new materials. Supramolecular chemistry has also been applied in the fields of catalysis, drug delivery and data storage. Jean-Marie Lehn won the 1987 Nobel Prize in Chemistry for his work in supramolecular chemistry.
Article: Organic light‐emitting diode (OLED) technology: materials, devices and display technologies – Bernard Geffroy, Philippe le Roy and Christophe Prat. 55:6 (2006)
Organic light-emitting diode (OLED) technology could be used to make flexible screens and displays
This review looks at organic light-emitting diode (OLED) technology, which can be made from a variety of materials. When structured appropriately, these materials can be combined in red, green and blue sub-pixels, as in standard LED displays, to form screens and displays. Because of the different structure of the material, OLED displays may have properties that a standard LED display lacks, including flexibility.
2019 has been declared by UNESCO as the Year of the Periodic Table. To celebrate, we are releasing a series of blogs about our favourite elements and their importance to the chemical industry. Today’s blog is about iodine and some of the exciting reactions it can do!
Iodine & Aluminium
Reaction between iodine and aluminum. These two components were mixed together, followed by a few drops of hot water. Source: FaceOfChemistry
Reactions between iodine and metals generally produce a metal iodide. With aluminium, the reaction that occurs is:
2Al(s) + 3I2(s) → Al2I6(s)
Freshly prepared aluminium iodide reacts vigorously with water, particularly if it is hot, releasing fumes of hydrogen iodide. The purple colour is given by residual iodine vapour.
Iodine & Zinc
Zinc and iodine react similarly to aluminium and iodine. Source: koen2all
Zinc is another metal, and when it reacts with iodine it too forms a salt – zinc iodide. The reaction is as follows:
Zn + I2 → ZnI2
The reaction is highly exothermic, so we see sublimation of some of the iodine as purple vapours, as with the aluminium reaction. Zinc iodide has uses in industrial radiography and electron microscopy.
Iodine & Sodium
Iodine reacting with molten sodium gives an explosive reaction that resembles fireworks. Source: Bunsen Burns
As with the other two metals, sodium reacts violently with iodine, producing clouds of purple sublimated iodine vapour and sodium iodide. The reaction proceeds as follows:
2Na + I2 → 2NaI
Sodium iodide is used as a food supplement and reactant in organic chemistry.
Iodine Clock reaction
The iodine clock reaction – a classic chemical clock used to study kinetics. Source: koen2all
The reaction starts by adding a solution of potassium iodide, sodium thiosulphate and starch to a mixture of hydrogen peroxide and sulphuric acid. A set of two reactions then occurs.
First, in a slow reaction, iodine is produced:
H2O2 + 2I− + 2H+ → I2 + 2H2O
This is followed by a second, fast reaction, in which iodine is converted back to iodide by the thiosulphate ion:
2S2O32− + I2 → S4O62− + 2I−
Once the thiosulphate has been used up, free iodine accumulates and forms a dark blue-black complex with the starch, causing the sudden colour change.
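The "clock" behaviour falls out of the two rates: iodine is produced slowly but removed quickly until the thiosulphate runs out. A minimal Euler-style sketch, with an assumed (not measured) rate constant and illustrative concentrations, shows the sudden appearance of free iodine:

```python
def clock_time(h2o2, iodide, thio, k_slow=0.1, dt=0.01):
    """Approximate time (s) until free iodine appears and the mixture darkens.

    k_slow is an assumed rate constant for illustration, not a measured value.
    Concentrations are in mol/L. Iodide is held constant, since the fast
    thiosulphate step regenerates it while any thiosulphate remains.
    """
    t, iodine = 0.0, 0.0
    while iodine < 1e-5 and t < 1000:
        made = k_slow * h2o2 * iodide * dt      # slow step produces I2
        h2o2 -= made
        if thio > 0:                            # fast step: 2 S2O3^2- per I2
            used = min(made, thio / 2)
            thio -= 2 * used
            made -= used
        iodine += made                          # free I2 only once thio is gone
        t += dt
    return t

# Doubling the thiosulphate roughly doubles the delay before the colour change.
```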
The elephant’s toothpaste reaction is a favourite for chemistry outreach events. Source: koen2all
In this fun reaction, hydrogen peroxide is decomposed into water and oxygen, catalysed by potassium iodide. When the reaction is mixed with washing-up liquid, the oxygen gas produced creates bubbles and the 'elephant's toothpaste' effect.
There are lot’s of fun reactions to be done with iodine and the other halogens (fluorine, bromine, chlorine).
Iodine’s sublimation to a bright purple vapour makes it’s reactions visually pleasing, and great fun for outreach events and science classes.
2019 has been declared by UNESCO as the Year of the Periodic Table. To celebrate, we are releasing a series of blogs about our favourite elements and their importance to the chemical industry. Today’s blog is about sulphur, specifically sulphites and their significance to the wine industry.
Sulphites and wine - what is all the fuss about? Image: Pixabay
What is a sulphite?
Sulphites are compounds that contain the sulphite ion (sulphate(IV), or SO32-). There is a wide range of compounds of this type, but common ones include sodium sulphite, potassium bisulphite and sulphur dioxide.
Sulphites are often added as preservatives to a variety of products, helping to maintain the shelf-life, freshness and taste of food or drink. They can be found in wines, dried fruits, cold meats and other processed foods. Some sulphites are produced naturally during wine-making; however, they are mainly added during the fermentation process to protect the wine from bacteria and oxidation.
Sneezing and wine
Sulphites have a bad reputation for causing adverse reactions, such as sneezing and other allergic symptoms. But are sulphites really allergens, or just another urban myth?
Despite sulphites being one of the top nine listed food allergens, many experts believe that the reaction to sulphites in wine is not a 'true allergy' but rather a sensitivity. Symptoms usually occur only in wine-drinkers with underlying medical issues, such as respiratory problems and asthma, and do not include headaches.
Some people report sneezing and similar symptoms when drinking wine.
Sulphites are generally considered safe to eat unless you test positive in a skin allergy test – some individuals, particularly those who are hyperallergic or aspirin-allergic, may have a true allergy to sulphites. For sufferers of a true allergy, symptoms would not be mild; they would have to avoid all food containing traces of sulphite.
Some scientists believe adverse reactions to red wine could be caused by increased levels of histamine. Fermented products, such as wine and aged cheese, have histamine present, and red wine has significantly more histamine than white wine. They suggest taking an anti-histamine around one hour before drinking to help reduce symptoms.
Despite sulphites not being considered a true allergen, wine-makers must still label wine as containing them. In 1987, a law was passed in the US requiring labels on wine containing a large amount of added sulphites. Similarly, in 2005, a European law was introduced to regulate European wine labelling. Sulphites are now listed as a common allergen on the labels of wines containing more than 10 mg/l.
You can often find the words ‘contains sulphites’ on a wine bottle. Image: Pixabay
Many food and drink manufacturers now produce products suitable for allergy sufferers, and winemakers have followed this trend by beginning to make sulphite-free wine. These are mainly dry red wines that contain high levels of tannins, which act as a natural preservative. Wines without added sulphites are generally labelled as organic or natural wines and have grown in popularity over the last few years, but unfortunately, many wine critics believe that these naturally preserved wines sacrifice flavour and shelf life.
In summary, sulphites are a common preservative, found not only in wine but in a range of foods, and do not generally cause allergic reactions. If you have a true sulphite allergy, you may want to try sulphite-free wine – but you will have to compromise on shelf life!
3D printing technology is becoming increasingly common in research and industry, but its use is limited by the lack of specialist inks that can be used to generate novel structures. In this study, scientists first made an ink from silicone microbeads bound in liquid silicone and water. The mixture has a paste-like consistency, similar to household toothpaste: it can be easily manipulated, but retains its shape and does not drip.
What is 3D Printing and how does it work? Video: Funk-e Studios
The ink was then fed into a 3D printer and used to create mesh patterns. The final structures are cured in an oven and contain embedded iron carbonyl particles, which allow the researchers to manipulate them with magnetic fields.
2019 has been declared by UNESCO as the Year of the Periodic Table. To celebrate, we are releasing a series of blogs about our favourite elements and their importance to the chemical industry. Today’s blog is about the highly reactive gas, fluorine.
Fluorine wasn’t discovered until the 19th century, and even now very few chemists have seen elemental fluorine. Fluorite – fluorine’s source mineral – was used industrially as far back as the 16th century, but elemental fluorine wasn’t made until much later.
Fluorite is the mineral form of calcium fluoride (CaF2) and can be found in a wide variety of colours – from pastel shades to burgundy, purple and golden yellow. Many samples of fluorite also fluoresce under UV light. Fluorite’s main industrial use is as a source of hydrogen fluoride (HF), a highly reactive acid. It can also be used to lower the melting point of raw materials, such as steel.
Fluorite has been used in industry for hundreds of years and is fluorescent under UV light. Image: Pixabay
In 1886, French chemist Henri Moissan first made elemental fluorine by electrolysing a mixture of potassium fluoride and hydrogen fluoride. He later won the Nobel Prize in Chemistry for his work.
Large-scale production of fluorine first began during World War II, where it was used to separate uranium for the Manhattan Project – the United States’ nuclear weapons development project.
Fluorine is known for its high reactivity. It is the most electronegative element and can react with almost every other element in the periodic table. Despite being difficult to handle, fluorine and fluorine-containing compounds have many real-world applications.
Due to its reactivity, elemental fluorine must be handled with great care. Fluorine reacts with water to produce hydrogen fluoride, an acid so corrosive it can even eat through glassware.
Fluorine’s reactivity isn’t all bad – in fact, it has hundreds of applications. One of the most common uses of fluorine is the fluorides in toothpaste.
These fluorides usually take the form of tin or sodium fluoride, and when you brush your teeth they react with the calcium in tooth enamel, making it less soluble in acid. This gives your teeth some protection from acidic foods and drinks such as fizzy drinks and juices.
The fluorochemical industry began in the 1930s and ’40s with DuPont, who commercialised organofluorine compounds on a large scale. They developed Freon-12 (dichlorodifluoromethane) after General Motors showed that chlorofluorocarbons (CFCs) could be used as refrigerants. The two companies joined together to market Freon-12, which quickly replaced the toxic kitchen refrigerants used previously.
CFCs were found to be creating holes in the ozone layer, and are also potent greenhouse gases. Image: Pixabay
CFCs were later banned by a number of countries due to the damage they caused to the ozone layer. More environmentally friendly fluorine-based alternatives are now used in refrigeration, including hydrochlorofluorocarbons (HCFCs) and hydrofluorocarbons (HFCs).
DuPont continued to pioneer the industry when recently hired chemist Roy J Plunkett accidentally discovered polytetrafluoroethylene (PTFE), the polymer later sold as Teflon. Tests on the mysterious white polymer he had generated showed that its high-temperature stability and corrosion resistance were significantly better than those of any other plastic. It took only three years for large-scale production to begin.
Fluorine – Professor Martyn Poliakoff. Video: Periodic Videos
The development of Teflon led to many other fluorine-containing polymers appearing on the market, including the expanded PTFE used in breathable rainwear by the Gore-Tex business, developed by Robert Gore, the son of ex-DuPont employee Bill Gore.
The fluorochemicals industry continues to grow to this day; in 2017 the global market was estimated at $17.6 billion.
2019 has been declared by UNESCO as the Year of the Periodic Table. To celebrate, we are releasing a series of blogs about our favourite elements and their importance to the chemical industry. Today’s blog is about the exciting group one element, lithium!
Lithium has a wide range of uses – it can even power batteries!
Lithium is mined in countries including Australia and Chile, and was initially used to treat gout, an arthritic inflammatory condition. Its use as a psychiatric medication wasn’t established until 1949, when an Australian psychiatrist discovered the positive effect that lithium salts had on treating mania. Since then, scientists have discovered that lithium works as a mood stabiliser by targeting neurotransmitters in the brain.
Neurotransmitters are chemicals released by one neuron to send a message to the next. There are several types found in humans, including dopamine, serotonin and glutamate. Each has a different role, and abnormal levels of each neurotransmitter can be linked to a variety of mental illnesses. In particular, an increase in glutamate – an excitatory neurotransmitter that plays a role in learning and memory – has been linked to the manic phase of bipolar disorder.
Lithium salts have been used as a medication for mania effectively since 1949. Image: Pixabay
Lithium is thought to stabilise levels of glutamate, keeping it at a healthy and stable level. Though it isn’t a fully comprehensive treatment for bipolar disorder, lithium has an important role in treating the manic phase and helping researchers to understand the condition.
One of the most common types of battery in modern electronics is the lithium-ion battery. This battery type was first invented in the 1970s, using titanium(IV) sulphide and lithium metal. Although it had great potential, scientists struggled to make a rechargeable version.
Initial rechargeable batteries were dangerous, mainly due to the instability of the lithium metal. This resulted in them failing safety tests and led to the use of lithium ions instead.
Lithium-ion batteries are widely used and developments in the technology continue today.
Developments in lithium-ion technology continue to this day, and the recently founded Faraday Institution plays a large role. As part of the Faraday Battery Challenge, it is bringing together expertise from universities and industry, supporting projects that develop lithium-based batteries along with new battery technologies.
Nuclear fusion happens in a hollow, doughnut-shaped steel vessel surrounded by magnets. The large magnetic fields contain a charged gas known as plasma, which is heated to around 100 million kelvin, causing the deuterium and tritium in the plasma to fuse. Keeping the plasma stable and preventing it from cooling is one of the largest engineering problems to overcome. This is where lithium comes in.
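For reference, the reaction targeted in these reactors is deuterium-tritium fusion, which releases about 17.6 MeV per fusion event:

```latex
{}^{2}_{1}\mathrm{H} + {}^{3}_{1}\mathrm{H} \longrightarrow {}^{4}_{2}\mathrm{He} + {}^{1}_{0}\mathrm{n} + 17.6\,\mathrm{MeV}
```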
Studies in which lithium is delivered in liquid form to the edge of the plasma show that the lithium remains stable, helps the plasma maintain its temperature, and could potentially be used to control it. Injected under certain conditions, lithium can also increase the plasma temperature, improving the overall conditions for fusion.
Lithium has uses in plasma stabilisation in nuclear fusion. Video: Tedx Talks
Aside from its uses in nuclear fusion, lithium has other uses in the nuclear industry – for example, as an additive in coolant systems. Lithium fluoride and similar salts have a low vapour pressure and can carry more heat than the same amount of water.
Called Philyra, after the Greek goddess of fragrance, the AI programme developed two new fragrances for Brazilian beauty company O Boticário.
‘What she did was super innovative. She had a sweet warm background, but added cardamom-like Indian cuisine scents and a milk that came from the flavour department,’ says David Apel, Senior Perfumer with Symrise. ‘From 1.7m formulas, it is amazing for her to find something that hadn’t been done before.’
Using AI to create new fragrances. Video: IBM Research
In a demonstration at IBM Research in Zurich, Switzerland, computational researcher Richard Goodwin showed how Philyra is able to scan 1,000 different formulations and over 60 raw materials, and compare them with fragrances currently on the market. It is possible to request a certain type of perfume and adjust its novelty.
2019 has been declared by UNESCO as the Year of the Periodic Table. To celebrate, we are releasing a series of blogs about our favourite elements and their importance to the chemical industry. Today’s blog is about the first element in the periodic table, hydrogen!
Hydrogen isn’t just for keeping balloons afloat. Image: Pixabay
Hydrogen (H2) gas has many uses in modern engineering. Scientists are always searching for cheaper, more renewable fuel sources that have a lower negative impact on the environment. Hydrogen was frequently used to generate energy in the past, and this drive for more renewable energy has given hydrogen-derived fuel a new lease of life.
Hydrogen can be used in fuel cells. These act like batteries, generating energy from a reaction between hydrogen and oxygen (O2). Hydrogen fuel cells have been incorporated into many modern technologies, including automotive applications. As the reaction generates only heat, electricity and water, fuel cells are significantly better for the environment than many alternatives. Hydrogen is also much cheaper as a commodity than typical fuels.
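The overall reaction taking place in a hydrogen fuel cell is simply the combination of hydrogen and oxygen to give water, releasing energy:

```latex
2\,\mathrm{H}_2 + \mathrm{O}_2 \longrightarrow 2\,\mathrm{H}_2\mathrm{O} + \text{energy}
```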
Hydrogen fuel cells can now be used to power automotive vehicles, including cars!
Hydrogen can also be used in engineering cooling systems. The gas’s physical properties make it 7-10 times better at cooling than air, and it can be easily detected by sensors. Because of this, hydrogen is used in cooling systems, which are generally smaller and less expensive than other available options.
Hydrogen gas can also be used in reactions. The most famous reaction using hydrogen is the production of ammonia (NH3), known as the Haber process. The Haber process was developed by Fritz Haber and Carl Bosch in the early 20th century to meet the need for nitrogen-based fertilisers. In the Haber process, atmospheric nitrogen (N2) is reacted with H2 over a metal catalyst to produce NH3.
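The overall Haber process reaction is a reversible equilibrium:

```latex
\mathrm{N}_2 + 3\,\mathrm{H}_2 \rightleftharpoons 2\,\mathrm{NH}_3
```

Industrially, the reaction is run over an iron catalyst at high temperature and pressure, to balance the equilibrium yield of ammonia against an acceptable reaction rate.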
Nitrogen-based fertilisers are still used today, but ammonia was one of the first to be commercially produced.
Ammonia is a valuable fertiliser, providing much-needed nitrogen to plants. It has been used on a variety of agricultural plants, including the food crops wheat and maize, since the early 20th century.
Chemists also use hydrogen in other chemical reactions, such as hydrogenation and reduction, to make commercially valuable products. Some of hydrogen’s physical properties make it tricky, and often dangerous, to use in industry; however, careful control of conditions allows for its safe use on larger scales.
Hydrogen gas can be explosive, making it often dangerous to use.
Producing hydrogen gas
There are many ways to produce gaseous hydrogen. The four main sources of commercially produced hydrogen are natural gas, oil, coal and electrolysis. To obtain gaseous hydrogen, the fossil fuels are ‘steam reformed’, a process which involves a reaction with steam at high pressure and temperature.
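Taking methane from natural gas as the typical example, steam reforming can be summarised as:

```latex
\mathrm{CH}_4 + \mathrm{H}_2\mathrm{O} \longrightarrow \mathrm{CO} + 3\,\mathrm{H}_2
```

The carbon monoxide produced can then react with further steam (the water-gas shift reaction, $\mathrm{CO} + \mathrm{H}_2\mathrm{O} \rightarrow \mathrm{CO}_2 + \mathrm{H}_2$) to yield additional hydrogen.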
Electrolysis of water is another method used in hydrogen production. This method is 70-80% efficient, but it often requires large amounts of energy, particularly in the form of heat, which can be sourced from waste heat produced by industrial plants.
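The overall electrolysis reaction simply splits water into its elements using an electric current:

```latex
2\,\mathrm{H}_2\mathrm{O} \longrightarrow 2\,\mathrm{H}_2 + \mathrm{O}_2
```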
So, what’s all this hot air about hydrogen? Video: TEDx Talks
An alternative route to hydrogen is biohydrogen. Hydrogen gas can be produced by certain types of algae through the fermentation of glucose, and some is also produced in a form of photosynthesis by cyanobacteria. This process can be used on an industrial scale.
Overall, hydrogen technology – whether new developments such as hydrogen-fuelled cars, or old ones like the Haber process – remains critical to the chemical industry.
Plant breeders are increasingly using techniques to produce new varieties that they say are indistinguishable from those developed through traditional breeding methods. New genome-editing technologies can introduce new traits more quickly and precisely.
However, in July 2018, the European Court of Justice ruled that these techniques alter the genetic material of an organism in a way that does not occur naturally, and so should fall under the GMO Directive. This went against the opinion of the Advocate General.
In October 2018, leading scientists representing 85 European research institutions endorsed a position paper warning that the ruling could lead to a de facto ban of innovative crop breeding.
The paper argues for an urgent review of European legislation, and, in the short term, for crops with small DNA adaptations obtained through genome editing to fall under the regulations for classically bred varieties.
‘As European leaders in the field of plant sciences […] we are hindered by an outdated regulatory framework that is not in line with recent scientific evidence,’ says one of the signatories, Dirk Inzé, Scientific Director at Life Sciences Institute VIB in Belgium.
2019 has been declared by UNESCO as the Year of the Periodic Table. To celebrate, we are releasing a series of blogs about our favourite elements and their importance to the chemical industry. Today’s blog is about one of the most abundant and most used elements, carbon!
Carbon could be called the element of life – it can be found in every living creature on Earth in a variety of different forms, from the backbone of your DNA, to the taste receptors in your tongue and the hormones controlling your hunger. Carbon-based chemistry surrounds us – in the air we breathe, in the food we eat and in the soil beneath our feet.
So, why is carbon so important to life? Carbon’s chemistry allows it to form large, intricate 3D structures, which are the basis of its interaction in biology – like jigsaw pieces that come together to build a tree, an elephant or a human being.
The study of carbon-based chemistry, or organic chemistry, has allowed us to better understand our living world and the interactions that occur, leading to development of better tasting food, higher yielding crops and more efficient medicines to improve our health.
In the early 19th century, chemist Justus von Liebig began synthesising organic, carbon-based molecules and said: ‘The production of all organic substances no longer belongs just to living organisms.’
Since then, hundreds of organic compounds for medicinal use have been synthesised – from adrenaline to ibuprofen – and hundreds of unique synthesis pathways have been described.
Organic chemistry – the study of carbon-based chemistry – has given us hundreds of modern medicines.
Carbon in materials
Each carbon atom can form four bonds, including bonds to other carbon atoms, allowing carbon to arrange itself into different molecular structures and form completely different substances. These structural forms, known as allotropes, can result in vast differences in the end material.
For example, one allotrope, diamond, is the hardest natural material and among the most thermally conductive, whereas another, graphite, is soft enough to be used in pencils and is highly electrically conductive.
Graphene is a carbon allotrope that exists in thin, two-dimensional layers, with the carbon atoms arranged in a honeycomb formation. Scientists had theorised its existence for years, but it was not isolated and characterised until 2004, by Andre Geim and Konstantin Novoselov at the University of Manchester, UK. The pair won the 2010 Nobel Prize in Physics for their work.
The structure of carbon atoms in graphene.
Graphene is highly conductive, flexible and transparent – meaning it can be used in electronics, medical biotechnology and a variety of other innovative applications.
Another innovative carbon material is carbon fibre, which can be used to produce carbon-fibre reinforced polymer (CFRP). CFRP is a polymer interwoven with carbon fibres 5-10 μm in diameter. The combination of the two materials gives an extremely strong but lightweight composite, useful in products ranging from aerospace and automotive components to sports equipment and technology.
Fueling the world
The name carbon comes from the Latin carbo meaning coal, and until recently most of our energy was generated by the consumption of carbon through the burning of naturally occurring carbon-based fuels, or fossil fuels. When these fuels, such as coal, natural gas and oil, are burnt, the combustion reaction generates carbon dioxide (CO2).
CO2, produced by burning fossil fuels, is thought to be a contributor to climate change. Image: Pixabay
High production of the by-product CO2, and its release into the atmosphere, is considered to have a negative environmental impact and is thought to contribute to global warming and climate change. Fossil fuels are not a renewable resource and supplies are expected to diminish in the next 50-100 years.
Consequently, there has been a movement towards more renewable energy, from wind, solar and hydropower, driving a move towards a low-carbon economy. These energy sources are generally considered to be better for the environment, with lower amounts of CO2 being produced.
Chemical engineer Jennifer Wilcox previews some amazing technology to scrub carbon from the air, using chemical reactions that capture and reuse CO2. Video: TED
In this drive for a low-carbon economy, new technology is being used to prevent the release of CO2 into the atmosphere in the first place. Carbon capture and storage (CCS) takes waste CO2 from large-scale industrial processes and transports it to a storage facility. CCS is one of the only proven, effective methods of decarbonisation currently available.
‘Biodegradable plastics have become more cost-competitive with petroleum-based plastics and the demand is growing significantly, particularly in Western Europe, where environmental regulations are the strictest,’ says Marifaith Hackett, director of specialty chemicals research at analysts IHS Markit. The current market value of biodegradable plastics is set to exceed $1.1bn in 2018, but could reach $1.7bn by 2023, according to IHS Markit’s new report.
The report finds that global demand for these polymers was 360,000 t in 2018, and forecasts an average annual growth rate of 9% for the five years to 2023 – equivalent to a volume increase of more than 50%. Western Europe holds the largest share (55%) of the global market, followed by Asia, Australia and New Zealand (25%), then North America (19%).
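The report's growth figures can be sanity-checked with a line of arithmetic: compounding 9% annual growth over five years gives just over a 50% increase, consistent with the forecast.

```python
# Compound 9% annual growth over the five years 2018-2023
growth_factor = 1.09 ** 5
print(f"Total growth: {(growth_factor - 1) * 100:.0f}%")         # ~54%
print(f"Implied 2023 demand: {360_000 * growth_factor:,.0f} t")  # ~554,000 t
```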
Here’s how much plastic trash is littering the Earth. Video: National Geographic
In another report released in May 2018, the US Plastics Industry Association (PLASTICS) was similarly optimistic, finding that the bioplastics sector (biodegradables made from biological substances) is at ‘a growth cycle stage’. It predicts the US sector will outpace the US economy as a whole by attracting new investments and entrants, while also bringing new products and manufacturing technologies to make bioplastics ‘more competitive and dynamic’.
As bioplastics product applications continue to expand, the dynamics of industry growth will continue to shift, the report notes. Presently, packaging is the largest market segment at 37%, followed by bottles at 32%. Changes in consumer behaviour are expected to be a significant driver.
Many countries, including China and the UK, have introduced plastic waste bans to tackle the problem. Image: Pixabay
‘Changes in US tax policy, particularly the full expensing of capital expenditure, should support R&D in bioplastics,’ says Perc Pineda, chief economist at PLASTICS. ‘The overall low cost of energy in the US complements nicely with R&D activities and manufacturing, which generates a stable supply of innovative bioplastic products.’ He points, for example, to efforts by companies and collaborations to develop and launch, at commercial scale, a 100% bio-based polyethylene terephthalate (PET) bottle. Most PET bottles currently contain around 30% bio-based material.
Biopharmaceuticals are sourced from living organisms.
Researchers at Massachusetts Institute of Technology (MIT), US, have developed a portable drug manufacturing system that can make several different biopharmaceuticals to be used in precision medicine or to treat outbreaks in developing countries.
Biopharmaceuticals are drugs made up of proteins such as antibodies and hormones, and are produced in bioreactors using bacteria, yeast or mammalian cells. They must be purified before use, so the process has dozens of steps and it can therefore take weeks or months to produce a batch.
The Challenges in Manufacturing Biologics. Video: Amgen
Due to the complex nature of the process and its time restrictions, biopharmaceuticals are usually produced at large factories dedicated to a single drug – often one that can treat a wide range of patients.
To help supply smaller, more specific groups of patients with drugs, a group of researchers at MIT have developed a system that can be easily configured to produce three different pharmaceuticals – human growth factor, interferon alpha 2b and granulocyte colony-stimulating factor – all of a comparable quality to commercially available counterparts.
Biopharmaceuticals can treat autoimmune diseases, such as arthritis. Image: Pixabay
‘Traditional biomanufacturing relies on unique processes for each new molecule that is produced,’ said J Christopher Love, a Chemical Engineering Professor at MIT’s Koch Institute for Integrative Cancer Research. ‘We’ve demonstrated a single hardware configuration that can produce different recombinant proteins in a fully automated, hands-free manner.’
2D materials have a thickness of just one molecule, which makes them especially promising for use in quantum computing: electrons are restricted to movement in two dimensions, because the wavelength of the electron is longer than the thickness of the material.
The most well known of these materials is graphene – a single layer of carbon – which since its Nobel Prize-winning isolation in 2004 has been posited as a game-changer in applications ranging from tissue engineering and water filtration to energy generation and organic electronics.
Now, an international team at DTU led by Assistant Professor Kasper Steen Pedersen has synthesised a novel nanomaterial with electrical and magnetic properties that the researchers claim make it suitable for future quantum computers and other applications in electronics.
Since graphene’s discovery, hundreds of new 2D materials have been synthesised, but the new material, published in Nature Chemistry, is based on a different concept. While most other 2D material candidates are inorganic, chromium-chloride-pyrazine (chemical formula CrCl2(pyrazine)2) is an organic-inorganic hybrid material.
From monitoring our heart rate and generating renewable energy to keeping astronauts safe in space, a number of novel applications for carbon nanotubes have emerged in recent months.
Academic and industrial interest around carbon nanotubes (CNTs) continues to increase, owing to their exceptional strength, stiffness and electronic properties.
Over the years, this interest has mainly focused on creating products that are both stronger and lighter, for example, in the sporting goods sector, but recently many ‘quirkier’ applications are beginning to appear.
Carbon nanotubes are already used in sporting goods such as tennis racquets. Image: Steven Pisano/Flickr
At Embry-Riddle Aeronautical University in Prescott, Arizona, for example, researchers are currently working with NASA on new types of nano sensors to keep astronauts safer in space.
The Embry-Riddle team – along with colleagues at LUNA Innovations, a fibre-optics sensing company based in Virginia, US – have focused on developing and refining smart material sensors that can be used to detect stress or damage in critical structures using a particular class of CNT called ‘buckypaper’.
The next step in nanotechnology | George Tulevski. Video: TED
With buckypaper, layers of nanotubes can be loosely bonded to form a paper-like thin sheet, effectively creating a layer of thousands of tiny sensors. These sensor sheets could improve the safety of future space travel via NASA’s inflatable space habitats – pressurised structures capable of supporting life in outer space – by detecting potentially damaging micrometeoroids and orbital debris (MMOD).
CNTs coated on a large flexible membrane on an inflatable habitat, for instance, could accurately monitor strain and pinpoint impact from nearby MMODs.
A catalyst is a substance that lowers the energy barrier of a reaction without being consumed by it – many industrial processes rely on a catalyst to make them feasible and economic.
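To see why lowering the energy barrier matters so much, consider the Arrhenius equation, k = A·exp(−Ea/RT), which relates a reaction's rate constant k to its activation energy Ea. A minimal sketch with hypothetical, illustrative numbers (not values for any specific catalyst):

```python
import math

R = 8.314  # gas constant, J/(mol K)

def rate_enhancement(ea_uncatalysed, ea_catalysed, temperature):
    """Ratio of catalysed to uncatalysed Arrhenius rate constants,
    assuming the pre-exponential factor A is unchanged."""
    return math.exp((ea_uncatalysed - ea_catalysed) / (R * temperature))

# Illustrative only: dropping the activation energy from 100 kJ/mol
# to 70 kJ/mol at 500 K speeds the reaction up more than a thousandfold.
print(f"{rate_enhancement(100e3, 70e3, 500):.0f}x faster")
```

Because the activation energy sits inside an exponential, even a modest reduction translates into a dramatic increase in reaction rate.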
There are many types of catalysts for different applications, and zeolite catalysts are used commercially to reduce the negative effects of exhaust fumes from diesel engines and produce fuels more efficiently. Catalysts can be studied with light, in a process called spectroscopy, to help understand how they work.
My PhD research has greatly benefitted from the use of synchrotron radiation. It helped me to gain detailed mechanistic insight into how the zeolite catalyst works. To date, I have completed four scientific visits at the Diamond Light Source, which is the UK’s national synchrotron facility, located in Oxfordshire.
Diamond Light Source has supported industrial and academic research since opening in 2007.
What is a synchrotron?
Diamond Light Source. Image credit: Diamond Light Source
A synchrotron generates very bright beams of light by accelerating electrons close to the speed of light and bending them through multiple magnets. The broad spectrum of light produced, ranging from X-rays to infrared (IR) light, is selectively filtered at the experimental laboratories (beamlines), where a specific region of the electromagnetic spectrum is utilised. My work uses the IR part of the electromagnetic spectrum. IR light has the right energy to probe bond stretches and deformations, allowing molecular observations and determination.
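As a rough illustration of why IR light suits this job, the energy of a mid-IR photon is comparable to molecular vibrational energies. A small sketch (the 3600 cm⁻¹ figure is a typical textbook value for a hydroxyl stretch, not a measurement from this work) converting a wavenumber to photon energy via E = hcν̃:

```python
H = 6.62607015e-34    # Planck constant, J s
C = 2.99792458e10     # speed of light, cm/s
EV = 1.602176634e-19  # joules per electronvolt

def photon_energy_ev(wavenumber_cm):
    """Energy of a photon with the given wavenumber (cm^-1), in eV."""
    return H * C * wavenumber_cm / EV

# A hydroxyl (O-H) stretch appears near 3600 cm^-1; its photon energy
# is a fraction of an eV -- right in the vibrational regime.
print(f"{photon_energy_ev(3600):.2f} eV")
```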
A highlight from last year has been attending a joint beamtime session with Prof Russell Howe and Prof Paul Wright at Diamond’s IR beamline (MIRIAM, B22). The MIRIAM beamline is managed by Dr Gianfelice Cinque and Dr Mark Frogley.
The synchrotron enables us to capture the catalyst in action during the methanol to hydrocarbons reaction. The changes in the zeolite hydroxyl stretches we observe correlate with the detection of the first hydrocarbon species downstream.
A cartoon illustration of the evolution of the zeolite hydroxyl stretch band during the methanol to hydrocarbons process. Image credit: Ivalina Minova
What is it like researching at Diamond?
My access to Diamond is typically spread over six-month intervals. To secure beamtime, we have to submit a two-page research proposal, which is assessed by a scientific peer-review panel; if successful, we are allocated three or four days to complete the proposed experiments.
The field of regenerative medicine is at a ‘pivotal point’ in its development, according to a panel of experts speaking at the Bio meeting in Boston in June 2018.
The past six months alone saw four new product approvals, which could be the ‘beginning of a large number of successes’, said moderator Morrie Ruffin, Managing Director of the Alliance for Regenerative Medicine, which now has over 300 members.
Clinical results emerging from cell therapies over the next two years will be comparable with the successes seen with CAR-T cancer therapies, predicts Mike Scott, Vice-President of Product Development at Toronto-based Blue Rock Therapeutics, whose lead product uses pluripotent stem cells to grow new neurons that restore lost dopamine function in Parkinson’s patients.
‘The area of regenerative medicine allows us to do something audacious: to strive for cures. If you think of CAR-T and gene therapies, there’s every reason to say we can achieve the same with regenerative medicines,’ agreed Felicia Pagliuca, Co-Founder of Boston biotech company Semma Therapeutics.
Semma aims to replace the lost pancreatic beta cells of patients with Type 1 diabetes with its insulin producing equivalents grown in the lab. The technology is currently at preclinical stage.
Regenerative medicine could help to treat diseases like type 1 diabetes, in which pancreatic cells function abnormally. Image: Pixabay
Storing placental and cord blood cells at birth may no longer be necessary in the future, the researchers suggested. Traditional stem cell therapy approaches have used mesenchymal stem cells from these sources to regrow tissues and organs by differentiation into multiple cell types. However, newer technologies are increasingly making new cell types from pluripotent stem cells generated directly from adult cells such as skin.
Engineers say they have demonstrated a cost-effective way to remove carbon dioxide from the atmosphere. The extracted CO2 could be used to make new fuels or be sent to storage.
The process of direct air capture (DAC) involves giant fans drawing ambient air into contact with an aqueous solution that traps CO2. Through heating and several chemical reactions, the CO2 is re-extracted and ready for further use.
‘The carbon dioxide generated via DAC can be combined with sequestration for carbon removal, or it can enable the production of carbon-neutral hydrocarbons, which is a way to take low-cost carbon-free power sources like solar or wind and channel them into fuels to decarbonise the transportation sector,’ said David Keith, founder of Carbon Engineering, a Canadian clean fuels enterprise, and a Professor of Physics at Harvard University, US.
Fuel from the Air – Sossina Haile. Video: TEDx Talks
DAC is not new, but its feasibility has been disputed. Now, Carbon Engineering reports how its pilot plant in British Columbia has been using standard industrial equipment since 2015. Keith’s team claims that a 1 Mt CO2/year DAC plant would cost $94-232/ton of CO2 captured. Previous theoretical estimates have ranged up to $1,000/ton.
Building roads with wastes can deliver a heap of performance as well as environmental benefits – so long as they don’t become a dumping ground for discarded products.
With an estimated value of around €16 trillion, Europe’s road network is its ‘most valuable asset’, according to the European Asphalt Pavement Association (EAPA). It’s also built on what many of us might consider a mountain of rubble.
‘Over the years, almost every conceivable waste material has been put into roads,’ said Fred Parrett, speaking at an SCI-organised event at University College London, UK, in March 2018. The list includes everything from crushed glass and incinerator ash to cellulose fibres and crumb rubber from end-of-life tyres – or even discarded plastic wastes.
But while using wastes in asphalt can potentially deliver big environmental and performance benefits, road experts warn that it is not the best option for all wastes. In the UK, a recent survey by the Asphalt Industry Alliance revealed that the length of roads in England and Wales that could fail if not maintained in the next 12 months would stretch almost around the world.
Road cracks form when asphalt fatigues and the road loses its tensile strength. Image: MaxPixel
Roads start to deteriorate when the bitumen ‘glue’ that binds the aggregates together becomes harder and more brittle over time, causing potholes and cracks to appear – a process accelerated by solar UV, oxygen, heat and cold, and particularly the freeze-thawing of water.
Bitumen additives or ‘modifiers’ help to slow this process down, but most of the traditional modifiers are expensive and derived from non-renewable fossil fuels. Bitumen substitutes made from end-of-life tyres and plastic wastes could potentially offer a cheaper, more sustainable option – but only if they improve rather than impair performance.
Traditional electronics are made from rigid and brittle materials. However, a new ‘self-healing’ electronic material allows a soft robot to recover its circuits after it is punctured, torn or even slashed with a razor blade.
Made from liquid metal droplets suspended in a flexible silicone elastomer, it is softer than skin and can stretch about twice its length before springing back to its original size.
Soft Robotics & Biologically Inspired Robotics at Carnegie Mellon University. Video: Mouser Electronics
‘The material around the damaged area automatically creates new conductive pathways, which bypass the damage and restore connectivity in the circuit,’ explains Carmel Majidi at Carnegie Mellon University in Pittsburgh, Pennsylvania. The rubbery material could be used for wearable computing, electronic textiles, soft field robots or inflatable extra-terrestrial housing.
‘There is a sweet spot for the size of the droplets,’ says Majidi. ‘We had to get the size not so small that they never rupture and form electronic connections, but not so big they would rupture even under light pressure.’
We begin our new series breaking down key innovations in agriculture with the Haber-Bosch process, which enabled large-scale agriculture worldwide.
Nitrogen is essential for plant growth, but harvesting crops steadily depletes it from the soil. Ammonia – a compound of nitrogen and hydrogen – is therefore a key ingredient in fertilisers, allowing farmers to replenish the soil with nitrogen at will. As well as fertilisers, ammonia is used in pharmaceuticals, plastics, refrigerants, explosives, and in numerous industrial processes.
But how is it made? At the turn of the 20th Century, fixed nitrogen was mostly mined from deposits of niter (also known as saltpetre – the mineral form of potassium nitrate), but the known reserves would not satisfy predicted demand. Researchers had to find alternative sources.
Fritz Haber (left) and Carl Bosch (right) created and commercialised the process.
Atmospheric nitrogen, which makes up almost 80% of air, was the obvious feedstock – its supply, to all intents and purposes, being infinite. But reacting atmospheric nitrogen, which is exceptionally stable owing to its strong triple bond, posed a challenge for chemists globally.
In 1905, German chemist Fritz Haber cracked the riddle of fixing nitrogen from air. Using high pressure and an iron catalyst, Haber was able to directly react nitrogen and hydrogen gas to create liquid ammonia.
His process was soon scaled up by BASF chemist and engineer Carl Bosch, becoming known as the Haber-Bosch process, and this would lead to the mass production of agricultural fertilisers and a phenomenal increase in the growth of crops for human consumption.
The Haber-Bosch process is conducted at a high pressure of 200 atmospheres and reaction temperatures of 450°C. It also requires a large feedstock of natural gas, and there is a global research and development effort to replace the process with a more sustainable alternative – just as the Haber-Bosch process replaced niter mining over a century ago.
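The harsh conditions reflect the equilibrium at the heart of the process:

```latex
\mathrm{N_2(g)} + 3\,\mathrm{H_2(g)} \;\rightleftharpoons\; 2\,\mathrm{NH_3(g)},
\qquad \Delta H^{\circ} \approx -92\ \mathrm{kJ\,mol^{-1}}
```

Because the forward reaction is exothermic and reduces the number of gas molecules, high pressure pushes the equilibrium towards ammonia, while the 450°C operating temperature is a compromise: higher temperatures speed up the reaction over the iron catalyst but lower the equilibrium yield.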
A 3D battery made using self-assembling polymers could allow devices like laptops and mobile phones to be charged much more rapidly.
In a conventional battery, the anode and cathode sit on either side of a non-conducting separator. But a new battery design by Cornell University researchers in the US intertwines the components in a 3D spiral structure, with thousands of nanoscale pores filled with the elements necessary for energy storage and delivery.
This type of ‘bottom-up’ self-assembly is attractive because it overcomes many of the existing limitations in 3D nanofabrication, enabling the rapid production of nanostructures at large scales.
In the Cornell design, the battery’s anode is made of gyroidal (spiral) thin films of carbon, generated by block copolymer self-assembly. They feature thousands of periodic pores around 40nm wide. The pores are coated with a 10 nm-thick separator layer, which is electronically insulating but ion-conducting. Some pores are filled with sulfur, which acts as the cathode and accepts electrons but doesn’t conduct electricity.
Adaptive battery can charge in seconds. Video: News Direct
‘This is potentially ground-breaking, if the process can be scaled up and the quality of the electrodes can be ensured,’ comments Yury Gogotsi, director of the A.J. Drexel Nanomaterials Institute, Philadelphia, US. ‘But this is still an early-stage development, a proof of concept. The main challenge is to ensure that no short-circuits occur in the structure.’
The eighth in the series, Kinase 2018: Towards New Frontiers – the RSC/SCI symposium on kinase design – took place at the Babraham Institute, Cambridge, a world-leading biomedical science research hub.
The focus of the event was to provide a space for the discussion of the ever-evolving kinase inhibitor landscape, including current challenges, opportunities and the road ahead.
A kinase is an enzyme that transfers phosphate groups to other proteins – a process called phosphorylation. Kinase activity is perturbed in many diseases, resulting in abnormal phosphorylation that drives disease. Kinase inhibitors are a class of drugs that act to inhibit aberrant kinase activity.
Cell signalling: kinases & phosphorylation. Image: Phospho Biomedical Animation
Over 100 delegates from across the world working in both academia and industry attended the event, including delegates from GlaxoSmithKline, AstraZeneca, Genentech, and Eli Lilly and Co.
The event boasted world-class speakers working on groundbreaking therapeutics involving kinase inhibitors, including designing drugs for the treatment of triple negative breast cancer, complications associated with diabetes, African sleeping sickness and more.
How can kinase inhibitors revolutionise cancer treatment?
Tsetse flies carry African sleeping sickness. Image: Oregon State University/Flickr
The keynote speaker, Prof Klaus Okkenhaug from Cambridge University, spoke about how the immune system can be manipulated to target and kill cancer cells by using kinase inhibitors.
Klaus is working on trying to better understand the effects of specific kinase inhibitors on the immune system in patients with blood cancer.
He also explored how his work can benefit those with APDS, a rare immunodeficiency disorder, which he helped to elucidate on a molecular level.
Solving graft rejection, one kinase at a time
An organ graft is a surgical procedure in which tissue is moved from one site in the body to another. Image: US Navy
Improving tolerance to organ grafts is at the forefront of transplantation medicine. James Reuberson, from UCB Pharma UK, highlighted how kinase inhibitors can be utilised to improve graft tolerance.
James took the delegates on a journey through drug discovery and development, highlighting the challenges involved in creating a drug with high efficacy. While still in its infancy, James’ drug shows potential to prolong graft retention.
Nature is providing the inspiration for a range of novel self-repairing materials – by mimicking bone healing to fix ceramics, for instance, or using bacteria to heal a ‘wound’ in an undersea power cable.
Self-healing polymers are already well known. A familiar example is self-healing composite aircraft wings: if a crack appears, microcapsules in the composite matrix rupture, releasing ‘sealant’ into the crack to repair it. Recently, however, researchers have expanded the range of ‘repairable’ substances to include other promising materials – including rubber, ceramics and even electronic circuits.
Paul Race, senior lecturer in biochemistry at Bristol University, UK, heads a multi-disciplinary project to develop new types of self-healing materials. The three-year project, called Manufacturing Immortality, is in partnership with six other UK universities and involves biologists, chemists and engineers. ‘Our aim is to create new materials that can regenerate – or are very difficult to break – by combining biological and non-biological components – such as bacteria with ceramics, glass or electronics,’ says Race, whose own research interests include the stereochemistry of antibiotics, and the activities of enzymes.
The project’s approach is quite different to most polymer-based self-healing technologies, which typically rely on simple hydrogen bonds and reversible covalent bonds. ‘There are limits to the polymer chemistry approach,’ he says. ‘We’re trying to take inspiration from biology, which uses much more elaborate and powerful approaches to achieve more dramatic repair.’
Self-healing rubber links permanent covalent bonds (in red) with reversible hydrogen bonds (green). Image: Peter and Ryan Allen/ Harvard press
As an example, Race refers to what happens when we break a bone or receive a bad cut, which triggers a cascade of events in which the body detects the damage and responds appropriately. The team’s work is aimed at three broad application areas: safety-critical systems; energy generation; and consumer electronics.
In April, EU Member States voted for a near-complete ban on the use of neonicotinoid insecticides – an extension of restrictions in place since 2013. The ban, which currently covers crops such as maize, wheat, barley, and oats, will be extended to include others such as sugar beet. Use in greenhouses will not be affected.
Some studies have argued that neonicotinoids contribute to declining honeybee populations, while many other scientists and farmers argue that there is no significant field data to support this.
In response to the recent ban, SCI’s Pest Management Science journal has made a number of related papers free to access to better inform on the pros and cons of neonicotinoids.
Like to know more about neonicotinoids? Click the links below…
Robin Blake and Len Copping discuss the recent political actions on the use of neonicotinoids in agriculture, and the UK’s hazard-based approach following field research unsupportive of an outright ban on the insecticides.
Conflicting evidence on the effects of neonicotinoids on the honeybee population has beekeepers confused and has led to an increase in the use of older insecticides, reports one beekeeper.
Following the 2013 EU partial ban on neonicotinoids, experts questioned the validity of the original laboratory research and called for good field data to fill the knowledge gaps. To encourage future debate, realistic field data is essential – and it discourages studies that use overdoses of no environmental relevance.
This paper describes the consequences of the ban on neonicotinoid seed treatments on pest management in oilseed rape, including serious crop losses from cabbage stem flea beetles and aphids that have developed resistance to other insecticides.
The Research Articles
Particle size is one of the most important properties affecting the driftability and behaviour of dust particles scraped from pesticide-dressed seeds during sowing. Different species showed variable dust particle size distributions, and none of the three measurement techniques used was able to describe the real size distribution accurately.
Aside from particle size, drift of scraped seed particles during sowing is mainly affected by two other physical properties – particle shape and envelope density. The impact of these abraded seed particles on the environment is highly dependent on their active ingredient content. In this study, the envelope density and chemical content of dust abraded from seeds was determined as a function of particle size for six seed species.
Substantial honey bee colony losses have occurred periodically in recent decades, but the drivers of these losses are not fully understood. Under field conditions, bee colonies are not adversely affected by long-lasting exposure to sublethal concentrations of thiacloprid – a popular neonicotinoid. No indications were found that field-realistic and higher doses exerted a biologically significant effect on colony performance.
Researchers claim to be ‘on the cusp’ of creating a new generation of devices that could vastly expand the practical applications for 3D and 4D printing. At the ACS meeting in New Orleans in March, H. Jerry Qi at Georgia Institute of Technology reported the development of a prototype printer that not only simplifies and speeds up traditional 3D printing processes, but also greatly expands the range of materials that can be printed.
4D printing would allow 3D printed components to change their shape over time after exposure to environmental triggers such as heat, light and humidity. In 2017, for example, Qi’s group, in collaboration with scientists at the Singapore University of Technology and Design, used a composite made from an acrylic and an epoxy along with a commercial heat source to create 4D objects, such as a flower that can close its petals or a star that morphs into a dome. These objects transformed 90% faster than previously possible because the team incorporated the mechanical programming steps directly into the 3D printing process.
H Jerry Qi (right) and Glaucio Paulino, a professor at Georgia Tech’s School of Civil and Environmental Engineering, hold 3D printed objects that use tensegrity – a structural system of floating rods in compression and cables in continuous tension. Image: Rob Felt
‘As a result, the 3D printed component can rapidly change its shape upon heating,’ the researchers reported. ‘This second shape largely remains stable in later variations in temperature such as cooling back to room temperature. Furthermore, a third shape can be programmed by thermomechanical loading, and the material will always recover back to the permanent (second) stable shape upon heating.’
In their latest work, the group sought to create an ‘all-in-one’ printer that combines four different printing techniques: aerosol, inkjet, direct ink write and fused deposition modelling. The resulting machine can handle a range of materials such as hydrogels, silver nanoparticle-based conductive inks, liquid crystal elastomers and shape memory polymers (SMPs).
It can even create electrical wiring that can be printed directly onto an antenna, sensor or other electrical device. The process uses a direct-ink-write method to produce a line of silver nanoparticle ink, which is dried using a photonic cure unit – whereupon the nanoparticles coalesce to form conductive wire. Lastly, the wires are encased in a plastic coating via the printer’s inkjet component.
The researchers can also use the printer to create higher-quality SMPs capable of making more intricate shape changes than in the past, and to make materials comprising both harder and softer, more bendable regions, Qi explained. Here, the printer projects a range of white, grey or black shades of light to trigger a polymer crosslinking reaction dependent on the greyscale of the shade shone on the component part. Brighter light shades create harder component parts than darker shades.
In terms of applications, Qi’s own particular interest is in developing ‘soft robots’ with sensory properties more akin to human skin than the traditional metallic or rigid robots with which we are probably more familiar. Sensory robots, Qi says, will play a big role in future safety for human workers working alongside robots. As a first step in that direction, his group is currently working with Children’s Healthcare of Atlanta to investigate whether the new technology could make prosthetic hands for children born with malformed arms – a condition not covered by most medical insurance policies. The idea would be to combine multiple different sensors to create a functional replacement hand.
In future, new 3D and 4D printers will ultimately be capable of printing whatever we might want to make, Qi says. He points, for example, to work by Jennifer Lewis at Harvard University to 3D print a Li-ion battery – an essential component of mobile phones and laptops. However, Qi notes that 3D printing does not always make economic or practical sense for all items. Instead, a big consideration will be ‘pick and place’ technology that mixes and matches printed and non-printed components to assemble the desired objects.
Concorde was one of only two supersonic airliners ever to enter commercial service. Image: Wikimedia Commons
In 2011, a chance encounter under the wings of Concorde at Duxford Air Museum, Cambridge, with Trinity College Dublin Professor Johnny Coleman, would set in motion a series of events that would lead, six years later, to the development of a 20t/year graphene manufacturing plant.
As soon as we got talking, I was impressed by Johnny’s practical, no-nonsense approach to solving the scalability issue with graphene production.
Coleman is a physicist, not a chemist, and believed that the solution lay in mechanical techniques. Following the conference, Thomas Swan agreed to fund his group for four years to develop a scalable process for the manufacture of graphene.
Just one atom thick, graphene consists of a single layer of carbon atoms joined in a hexagonal lattice. Image: Pixabay
Coleman and his team initially considered sonication – when sound waves are applied to a sample to agitate its particles – but quickly ruled it out due to its lack of scalability. He then sent one of his researchers out to the shops to buy a kitchen blender. They threw together some graphite, water, and a squirt of washing-up liquid into the blender, switched it on, and went for a cup of coffee.
When they later analysed the ‘grey soup’ they had created, they found they had successfully made few-layer graphene platelets. The group then spent months optimising the technique and worked closely with Thomas Swan scientists to transfer the process back to Thomas Swan’s manufacturing HQ in Consett, County Durham, UK.
Graphene is 300 times stronger than steel.
The plant can make up to 20t/year of high-quality graphene. It uses a high-shear continuous process to exfoliate graphite flakes into few-layer graphene platelets in an aqueous dispersion.
The dispersion is stabilised by adding various surfactants before separating out the graphene using continuous cross-flow filtration devices developed with the support of the UK’s Centre for Process Innovation (CPI), part of the High Value Manufacturing Catapult – a government initiative focused on fostering innovation and economic growth in specific research areas.
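The physics behind the high-shear exfoliation step can be made concrete: Coleman’s group reported (Paton et al., Nature Materials, 2014) that graphite exfoliates once the local shear rate exceeds roughly 10⁴ s⁻¹. A rotor-stator mixer’s shear rate can be approximated as tip speed divided by the rotor-stator gap – the mixer dimensions below are illustrative, not Thomas Swan’s:

```python
import math

def shear_rate(rpm: float, rotor_diameter_m: float, gap_m: float) -> float:
    """Approximate shear rate (1/s) as rotor tip speed divided by the gap."""
    tip_speed = math.pi * rotor_diameter_m * rpm / 60.0  # m/s
    return tip_speed / gap_m

# A typical lab-scale rotor-stator head (illustrative numbers):
rate = shear_rate(rpm=4500, rotor_diameter_m=0.032, gap_m=0.000135)
print(f"{rate:.2e} 1/s")  # comfortably above the ~1e4 1/s exfoliation threshold
```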
Using sticky tape, scientists pulled off graphene sheets from a block of graphite. Image: Pixabay
This de-risking of process development using a Catapult is a classic example of effective government intervention to support innovative SMEs. CPI not only showed us it worked, but also optimised the technique for us.
The company quickly realised that selling graphene in a powder form with no application data was not going to work. Instead, we developed a range of performance data to assist the sales team by highlighting what graphene can do if adopted into a range of applications.
The potential of graphene can be commercialised using composites. Video: The University of Manchester – The home of graphene
We also moved to make the product available in ‘industry friendly’ forms such as epoxy resin dispersions or polymer masterbatches. This move, slightly downstream from the raw material, has recently led to Thomas Swan announcing its intention to expand its range of formulated graphene materials, with a prototype product focusing on the manufacture of a carbon fibre composite.
Our application data shows that graphene has significant benefits as an industrial additive. Presenting this data to composite-using downstream customers is starting to open doors and create supply chain partnerships to get a raw material all the way to a fully integrated application.
Andre Geim and Kostya Novoselov won the 2010 Nobel Prize in Physics for their discovery of graphene. Image: Wikimedia Commons
The move downstream, to develop useable forms of graphene, is common in the industry, with most graphene suppliers now making their products available as an ink, dispersion or masterbatch. Thomas Swan’s experience with single-wall carbon nanotubes has made us aware of the need to take more control of graphene application development to ensure rapid market adoption.
Graphene applications drawing most interest include composites, conductive inks, battery materials, and resistive heating panels, although much of this demand is to satisfy commercial R&D rather than full commercial production.
Graphene science | Mikael Fogelström | TEDxGöteborg. Video: TEDx Talks
Thanks to innovations like our continuous high-shear manufacturing process, Thomas Swan believes that graphene is about to become very easy to make. Before it can be considered a commodity, however, it will also need to deliver real value in downstream applications. The company is therefore also increasing its efforts to understand market-driven demand and application development.
As the initial hype over the ‘wonder’ material graphene starts to wane, progress is being made to develop scalable manufacturing techniques and to ensure graphene delivers some much-promised benefits to downstream applications.
Researchers at the University of Waterloo, Canada, have developed an innovative method for capturing renewable natural gas from cow and pig manure for use as a fuel for heating homes, powering industry, and even as a replacement for diesel fuel in trucks.
It is based on a process called methanation. Biogas from manure is mixed with hydrogen, then run through a catalytic converter, producing methane from carbon dioxide in the biogas through a chemical reaction.
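The methanation step is the Sabatier reaction, in which CO2 in the biogas is hydrogenated to methane over a catalyst (typically nickel or ruthenium):

```latex
\mathrm{CO_2} + 4\,\mathrm{H_2} \;\rightarrow\; \mathrm{CH_4} + 2\,\mathrm{H_2O},
\qquad \Delta H^{\circ} \approx -165\ \mathrm{kJ\,mol^{-1}}
```

The reaction is strongly exothermic, which is one reason it can be run with relatively simple equipment once a source of hydrogen is available.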
A biogas plant. Image: Pixabay
The researchers claim that power could be taken from the grid at times of low demand or generated on-site via wind or solar power to produce the hydrogen.
The renewable natural gas produced would yield a large percentage of the manure’s energy potential and efficiently store electricity, while emitting a fraction of the gases produced when the manure is used as a fertiliser.
‘The potential is huge,’ said David Simakov, Professor of Chemical Engineering at Waterloo. 'There are multiple ways we can benefit from this single approach.’
See a Farm Convert Pig Poop Into Electricity. Video: National Geographic
Using a computer model of a 2,000-head dairy farm in Ontario, which already collects manure and converts it into biogas in anaerobic digesters before burning it in generators, the researchers tested the concept.
They estimated that a $5-million investment in a methanation system would have a five-year payback period, taking government subsidies for renewable natural gas into account.
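As a sanity check on those figures: a five-year payback on a $5-million outlay implies roughly $1 million per year in net cash flow. The round number below is a hypothetical simplification – the researchers’ model is far more detailed – but the arithmetic is:

```python
# Simple (undiscounted) payback-period calculation for the
# methanation-system investment described above.
def payback_years(investment: float, annual_net_cashflow: float) -> float:
    """Years needed for cumulative net cash flow to repay the investment."""
    return investment / annual_net_cashflow

years = payback_years(investment=5_000_000, annual_net_cashflow=1_000_000)
print(years)  # 5.0
```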
'This is how we can make the transition from fossil-based energy to renewable energy using existing infrastructure, which is a tremendous advantage,’ Simakov said.
Tweaking the chemical structure of the antibiotic vancomycin may offer a new route to tackle the burgeoning problem of antibiotic-resistant bacteria, researchers in Australia have discovered.
Vancomycin has been used since the late 1950s to treat life-threatening infections caused by Gram-positive bacteria, including methicillin-resistant S. aureus (MRSA). The antibiotic works by binding to Lipid II, a precursor of the cell wall component peptidoglycan, thus inhibiting bacterial growth.
Lipid II is present in both Gram-positive and Gram-negative bacteria. However, in Gram-negative bacteria it is protected by an outer membrane. In Gram-positive bacteria, Lipid II is embedded in the cell membrane but part of the molecule – a pentapeptide component – sticks out, which is what vancomycin binds to.
The researchers at the University of Queensland’s Institute for Molecular Bioscience (IMB), led by the director of its Centre for Superbug Solutions, Matt Cooper, reasoned that if they could increase the ability of vancomycin to bind to the bacterial membrane, this would make it more difficult for bacteria to develop resistance to it.
‘Our strategy was to add components to vancomycin so that the new derivatives – which we call “vancapticins” – could target more widely the membrane surface,’ explains Mark Blaskovich, senior research chemist at IMB. ‘By providing two binding sites – the membrane surface and the membrane-embedded Lipid II - this allows binding to resistant strains in which the Lipid II has mutated to reduce interactions with vancomycin.’
In addition, the researchers say that the vancapticins have been designed to take advantage of compositional differences between mammalian and bacteria cell membranes – ie bacterial cells have a greater negative charge. The vancapticins have greater selectivity for bacterial cells over mammalian cells, potentially reducing off-target effects and giving a better safety profile. A series of structure–activity studies showed that some of the vancapticins were more than 100 times more active than vancomycin.
Hospital-Associated Methicillin-resistant Staphylococcus aureus (MRSA) Bacteria. Image: NIAID
This membrane-targeting strategy, the researchers say, has the potential to ‘revitalise’ antibiotics that have lost their effectiveness against recalcitrant bacteria as well as enhance the activity of other intravenous-administered drugs that target membrane associated receptors.
John Mann, emeritus professor of chemistry at Queen’s University Belfast, UK, comments: ‘Bacteria have developed numerous strategies to modify the binding, uptake and expulsion of antibiotics, and thus develop resistance. So, it is especially exciting to see the development of these new vancomycin derivatives that enhance the membrane binding properties of the antibiotic, thus enhancing its efficacy and beating the bacteria at their own game.’
With a raft of developments in engineered timber, architects and designers are increasingly turning to wood as their material of choice. In advance of SCI’s Timber in Construction Materials event, here are five facts about this spectacularly versatile, sustainable material.
1. There’s a super-dense wood that’s as strong as steel, but six times lighter
Liangbing Hu and Teng Li pose with their chemically treated bulletproof wood. Image: University of Maryland
A team at the University of Maryland (UMD), US, have made wood 12 times stronger and 10 times tougher than in its natural form.
Their process consists of boiling the wood in a bath of sodium hydroxide and sodium sulphite, heating it, then subjecting it to compression.
Leading the research, Liangbing Hu, assistant professor in UMD’s department of materials science, said, ‘This could be a competitor to steel or even titanium alloys, it is so strong and durable. It’s also comparable to carbon fibre, but much less expensive.’
The team shot bullet-like projectiles at their super wood to test it – predictably, they blew straight through natural wood, but were stopped by the new material.
The discovery could make even soft, fast-growing woods, such as balsa, more useful in buildings – offering a much quicker carbon payback than slower-growing denser hardwoods such as teak.
The researchers claim the process will work on any kind of timber. Many methods for densifying wood have been tried over the years, such as exposing the wood to steam or ammonia and then rolling it, like a steel bar, but the results have been less than ideal – particularly due to wood’s tendency to expand and contract in response to atmospheric water.
2. It doesn’t have to burn
You’d be forgiven for associating wood with fire – but engineered timber products such as cross-laminated timber (CLT) have repeatedly demonstrated excellent fireproofing qualities in testing.
The moisture content of timber means that CLT panels char slowly and predictably. This creates an insulating layer that protects the core of the panel, allowing it to maintain its structural integrity for up to three hours.
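To show how predictable this charring is: structural fire design codes (for example Eurocode 5) assign softwood a notional one-dimensional charring rate of about 0.65 mm/min, so char depth can be estimated with simple arithmetic. A minimal sketch, assuming that rate:

```python
# Estimate char depth in a CLT panel from exposure time, assuming the
# ~0.65 mm/min one-dimensional softwood charring rate used in Eurocode 5.
def char_depth_mm(minutes: float, rate_mm_per_min: float = 0.65) -> float:
    """Depth of the charred (sacrificial) layer after a given fire exposure."""
    return minutes * rate_mm_per_min

# After a 90-minute fire exposure:
print(char_depth_mm(90))  # 58.5 mm charred; the section behind stays structural
```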
3. Timber towers are coming
Proposed design for Sumitomo Forestry’s 2041 tower.
Picture a skyscraper. In your mind’s eye, it’s all steel and glass, right?
That’s set to change. Just this month, Japanese timber company Sumitomo Forestry revealed plans for the world’s tallest wooden building in Tokyo. At 350 metres, the proposed skyscraper is taller than any in the country – although taller buildings could crop up before it is built; Sumitomo plans to complete the tower to mark the company’s 350th anniversary in 2041.
The company plans for 90% of its hybrid structure to be wood – a whopping 185,000 cubic metres of timber are planned for use in the ‘braced tube structure’ that features minimal steel – the columns and beams will be hybrid steel and timber, and there will be some additional steel braces in the construction. The tower would contain a hotel, residential units, offices, and shops – surrounded by large, plant-covered balconies.
Today’s tallest timber structure is the Brock Commons Tallwood House, a student residence building at the University of British Columbia (UBC), Canada.
Standing at 53 metres, the 18-storey block was prefabricated off-site, and then constructed in just 70 days. The elevator and stair shafts were made from concrete, but the vertical columns and floor plates were constructed using glue-laminated timber – multiple layers of dimensioned lumber bonded by durable, moisture-resistant structural adhesives.
4. London is home to the world’s largest timber building
Not the tallest – the largest. Dalston Works, a 10-storey, 121-unit housing development in East London, was completed in 2017. You wouldn’t know from its outer appearance – it’s clad in brick – but from the first floor upwards, the walls, floors, ceilings, stairs and lift core are all made from CLT.
It was designed by Waugh Thistleton – a firm that has pioneered the use of CLT since 2003. The timber frame offers 50% less embodied CO2 (calculated from the amount of energy required in its production) than a traditional concrete frame, and locks in 2,600 tonnes of CO2.
5. Wood is 100% renewable (as long as it’s sustainably managed)
Unlike bricks and concrete, which rely on the extraction of a finite supply of raw materials, timber is truly renewable – that is, of course, if another tree is planted when one is felled. Timber also does not require the extreme levels of heat used in the production of steel.
With an ever-increasing demand for data storage, the race is on to develop new materials that offer greater storage density. Researchers have identified a host of exotic materials that use new ways to pack ‘1’s and ‘0’s into ever-smaller spaces.
And, while many of them are still lab curiosities, they offer the potential to improve data storage density by 100 times or more.
Having a moment
Data storage technology has moved quickly away from floppy disks (pictured) and CD-ROMs. Image: Pexels
The principle behind many storage media is to use magnetic ‘read’ and ‘write’ heads, an idea also exploited by many of these new technologies – albeit on a much smaller scale.
A good example is recent work from Manchester University, UK, where researchers have raised the temperature at which ‘single molecule magnets’ can be magnetised. Single-molecule magnets could have 100 times the data storage density of existing memory devices.
In theory, any magnetic entity can be used to store data, as reversing its polarity can switch it from a ‘1’ to a ‘0’. In this case, instead of reading and writing areas of a magnetic disk, the researchers have created single molecules that exhibit magnetic ‘hysteresis’ – a prerequisite for data storage.
Researchers discuss the circuit boards in development that negotiate Moore’s Law. Video: Chemistry at The University of Manchester
‘You need a molecule that has its magnetic moment in two directions,’ says Nick Chilton, Ramsay Memorial research fellow in the school of chemistry. ‘To realise this in a single molecule, you need very specific conditions.’
In addition to having a strong magnetic moment, the molecule needs a slow relaxation time – that is, the time it takes for the molecule to ‘flip’ naturally from a ‘1’ to a ‘0’. ‘If this time is effectively indefinite, it would be useful for data storage,’ he says.
The key is that the molecule itself must have a magnetic moment. So, while a bulk substance such as iron oxide is ‘magnetic’, its individual molecules are not.
A binary digit, or bit, is the smallest unit of data in computing. The system is used in nearly all modern computers and technology. Image: Pixabay
Chilton and his colleagues have identified and synthesised a single-molecule magnet – a dysprosium atom, sandwiched between two cyclopentadienyl rings – that can be magnetised at 60K. This is 46K higher than any previous single-molecule magnet – and only 17K below the temperature of liquid nitrogen.
Being able to work with liquid nitrogen – rather than liquid helium – would bring the cost of a storage device down dramatically, says Chilton. To do this, the researchers must now model and make new structures that will work at 77K or higher.
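The stakes of those few kelvin are easy to see from the Arrhenius-style (Orbach) relaxation law commonly used to describe single-molecule magnets, τ = τ₀·exp(Ueff/kBT). The sketch below is illustrative only: the ~1,200 cm⁻¹ energy barrier and 10⁻¹¹ s attempt time are assumed values for the purpose of the example, not figures reported here.

```python
import math

K_B = 0.695  # Boltzmann constant in cm^-1 per kelvin

def relaxation_time(temp_k, u_eff_cm=1200, tau_0=1e-11):
    """Orbach relaxation time tau = tau_0 * exp(Ueff / (kB * T)).

    u_eff_cm: effective energy barrier in cm^-1 (illustrative assumption)
    tau_0:    attempt time in seconds (illustrative assumption)
    """
    return tau_0 * math.exp(u_eff_cm / (K_B * temp_k))

# Relaxation time collapses by orders of magnitude over a few tens of kelvin,
# which is why raising the working temperature towards 77 K is so hard.
for T in (60, 77, 100):
    print(f"T = {T:3d} K: tau ~ {relaxation_time(T):.3g} s")
```

With these assumed parameters the memory lifetime falls from tens of seconds at 60 K to well under a second at 77 K, illustrating why new molecular structures, not just better cooling, are needed.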
Skyrmions may sound like a new adversary for Doctor Who, but they are actually another swirl-like magnetic entity that could be used to represent a bit of digital data.
Scientists at the Max Born Institute (MBI), Germany – in collaboration with colleagues from Massachusetts Institute of Technology, US – have devised a way to generate skyrmions in a controllable way, by building a ‘racetrack’ nanowire memory device that might in future be incorporated into a conventional memory chip.
‘Skyrmions can be conceived as particles – because that’s how they act,’ says Bastian Pfau, a postdoctoral researcher at MBI. They are generated using a current pulse.
‘Earlier research put a lot of current pulses through a racetrack and created a skyrmion randomly,’ he says. ‘We’ve created them in a controlled and integrated way: they’re created on the racetrack exactly where you want them.’
This racetrack memory device could be incorporated into standard memory chips, say researchers at the Max Born Institute. Credit: Grafix
In fact, skyrmions can be both created and moved using current pulses – but the pulse for creating them is slightly stronger than the one that moves them. The advantage of using a current pulse is that it requires no moving parts.
The resulting racetrack is a three-layer nanowire about 20nm thick – a structure that will hold around 100 skyrmions along a one-micron length of wire.
While the current research is done ‘in the plane’ with the nanowires held horizontally, Pfau says that in the future, wires could be stacked vertically in an array to boost storage capacity. ‘This would increase the storage density by 100. But this is in the future and nobody has made a strip line that’s vertical yet.’
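Taking the quoted figures at face value – around 100 skyrmions along a one-micron length of wire – allows a quick back-of-envelope density estimate. Note the 50 nm track pitch below is a hypothetical packing assumption for illustration, not a figure from the research.

```python
# ~100 skyrmions per micron of track implies one bit per 10 nm of wire.
skyrmions_per_micron = 100
bit_pitch_nm = 1000 / skyrmions_per_micron  # linear spacing per bit, nm

# Hypothetical areal density, assuming parallel racetracks at a 50 nm pitch
# (the pitch is an assumption, not reported by the researchers).
track_pitch_nm = 50
bits_per_nm2 = 1 / (bit_pitch_nm * track_pitch_nm)
bits_per_cm2 = bits_per_nm2 * 1e14  # 1 cm^2 = 1e14 nm^2

print(f"Bit pitch along track: {bit_pitch_nm:.0f} nm")
print(f"Hypothetical areal density: {bits_per_cm2:.2g} bits/cm^2")
```

Under these assumptions the planar layout already reaches hundreds of gigabits per square centimetre, before any of the vertical stacking Pfau describes.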
Could magnetic skyrmions hold the answer to better data storage? Video: Durham University
‘The whole function depends on how you create the multi-layer,’ he says. To stand any chance of being commercialised, which might take six or eight years, Pfau says that new materials will be needed.
However, he is confident this will happen – and that the technology can be merged with ‘conventional’ electronic devices.
A new type of wheat, chock-full of healthy fibre, has been launched by an international team of plant geneticists. The first crop of this super wheat was recently harvested on farms in Idaho, Oregon, and Washington state in the US, ready for testing by various food companies.
Food products are expected to hit the US market in 2019. They will be marketed for their high content of ‘resistant starch’, known to improve digestive health, be protective against the genetic damage that precedes bowel cancer, and help protect against Type 2 diabetes.
How do carbohydrates impact your health? Video: TED-Ed
‘The wheat plant and the grain look like any other wheat. The main difference is the grain composition: the GM Arista wheat contains more than ten times the level of resistant starch and three to four times the level of total dietary fibre, so it is much better for your health, compared with regular wheat,’ says Ahmed Regina, plant scientist at Australian science agency CSIRO.
Starch is made up of two types of polymers of glucose – amylopectin and amylose. Amylopectin, the main starch type in cereals, is easily digested because it has a highly branched chemical structure, whereas amylose has a mainly linear structure and is more resistant.
Bread and potatoes are foods also high in starch. Image: Pixabay
Breeders drastically reduced easily digested amylopectin starch by downregulating the activity of two enzymes, so increasing the amount of amylose in the grain from 20–30% to an impressive 85%.
The non-GM breeding approach works because the building blocks for both amylopectin and amylose starch synthesis are the same. With the enzymes involved in making amylopectin not working, more blocks are then available for amylose synthesis.
‘Resistant starch is starch that is not digested and reaches the large intestines where it can be fermented by bacteria. Usually amylose is what is resistant to digestion,’ comments Mike Keenan, food and nutrition scientist at Louisiana State University, US. ‘Most people consume far too little fibre, so consuming products higher in resistant starch would be beneficial.’
He notes that fermentation of starch in the gut causes the production of short-chain fatty acids such as butyrate that ‘have effects throughout the body, even the mental health of humans’.
The GM wheat will hit US supermarkets in 2019. Image: Pxhere
The super-fibre wheat stems from a collaboration begun in 2006 between French firm Limagrain Céréales Ingrédients, Australian science agency CSIRO, and the Grains Research and Development Corporation, an Australian government agency.
This resulted in a spin out company, Arista Cereal Technologies. After the US, Arista reports that the next markets will be in Australia and Japan.
Psilocybin mushrooms have psychedelic properties. Image: Wikimedia Commons
The psychoactive compound in psychedelic ‘magic mushrooms’ could pave the way for new drugs to treat depression, according to a new study. Patients in the study reported that their mood had lifted, they felt less depressed and were less stressed immediately after taking psilocybin. Nearly half (47%) were still benefiting five weeks after discontinuing treatment.
Robin Carhart-Harris and his team at Imperial College London, UK – the Psychedelic Research Group – gave psilocybin to 19 patients suffering from ‘treatment resistant’ depression, who had failed to benefit from other depression therapies. They were given 10mg initially and 25mg one week later.
The Psychedelic Research Group is the first to use LSD in UK research since the Misuse of Drugs Act 1971. Image: Pixabay
‘Several of our patients described feeling “reset” after the treatment and often used computer analogies,’ said Carhart-Harris. ’Psilocybin may be giving these individuals the temporary kick start they need to break out of their depressive states.’
Functional MRI scans measuring activity and blood flow in the brain showed marked differences after the treatment. There was reduced blood flow to areas of the brain, including the amygdala, which processes emotional responses, such as stress and fear. Another brain network appeared to ‘stabilise’ after treatment.
‘fMRI scans indicate that the communication within a certain prefronto-limbic circuit known to regulate affective responsiveness, is normalised one day after psilocybin treatment,’ said Imperial College psychologist Tobias Buchborn. ‘This normalisation seems specifically related to the feeling of unity experienced during the psilocybin session.’
The trial didn’t include a control/placebo group for comparison. However, the team plans to compare the effects of psilocybin against a leading antidepressant in a six-week trial in 2018.
Scientists used neuroimaging to track the effectiveness of the treatment.
‘These are exciting, but preliminary findings,’ said Mitul Mehta, professor of neuroimaging & psychopharmacology at King’s College London. ‘It is only a single dose of psilocybin, but this was able to reduce symptoms and produce changes in the same brain networks we know are involved in depression. This impressive study provides a clear rationale for longer-term, controlled studies.’
‘Some of the next challenges are to see if the therapeutic effects hold up in larger groups,’ commented Anil Seth, professor of cognitive and computational neuroscience at Sussex University, UK: ‘And to understand more about how the changes in brain activity elicited by psilocybin underpin both the transient changes in conscious experience the drug produces, as well as the more long-lasting effects on depression.’
Psychedelics: Lifting the veil | Robin Carhart-Harris | TEDxWarwick Video: TEDx Talks
The trial also backs up the results of an earlier study by Robin Carhart-Harris and coworkers in 2016, which found that psilocybin reduced symptoms in 12 treatment resistant patients, five of whom were no longer classed as depressed three months later. Also in 2016, a trial by other researchers in the US demonstrated that a single dose could alleviate the anxiety and depression of people with advanced cancer for six months or longer.
The US is in the midst of a public health epidemic. Tens of thousands of people are dying each year from opioid drugs, including overdoses of prescription painkillers such as OxyContin (oxycodone) and the illicit street drug heroin, and each year the numbers rise.
The opioid epidemic is currently killing almost twice as many people as shootings or motor vehicle accidents, with overdoses quadrupling since 1999. According to Gary Franklin, medical director of the Washington State Department of Labor and Industries and a professor of health at the University of Washington, the opioid epidemic is ‘the worst man-made epidemic in modern medical history in the US’.
Montgomery County, Ohio, is at the centre of the epidemic, with the most opioid-related deaths per capita this year. Image: Wikimedia Commons
Incredibly, an influx of synthetic opioids is making the problem worse. Fentanyl, a licensed drug to treat severe pain, is increasingly turning up on the street as illicit fentanyl, often mixed with heroin. According to the NCHS, fentanyl and synthetic opioids are blamed for 20,145 of the 64,070 overdose deaths in 2016. Heroin contributed to 15,446 deaths, while prescription opioids caused 14,427.
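As a rough check, the shares behind those NCHS figures can be computed directly. The categories overlap – many deaths involve more than one drug – so the percentages are not expected to sum to 100%.

```python
# Shares of the 64,070 US overdose deaths in 2016 (NCHS figures cited above).
total_deaths = 64_070
by_cause = {
    "fentanyl and synthetic opioids": 20_145,
    "heroin": 15_446,
    "prescription opioids": 14_427,
}

for cause, deaths in by_cause.items():
    share = deaths / total_deaths
    print(f"{cause}: {share:.1%}")

# Note: categories overlap, since a single death can involve several drugs,
# so these shares deliberately do not sum to 100%.
```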
Fentanyl (C22H28N2O), a lipophilic phenylpiperidine opioid agonist, is generally formulated as a transdermal patch, lollipop, or dissolving tablet. Like the opioids derived from opium poppies, such as morphine, fentanyl binds to opioid receptors in the brain and other organs of the body, specifically the mu-receptor.
Heroin and other opioids come from the opium poppy. Image: Max Pixel
Such binding mimics the effects of endogenous opiates (endorphins), creating an analgesic effect, as well as a sense of well-being when the chemical binds to receptors in the reward region of the brain. Drowsiness and respiratory depression are other effects, which can lead to death from an overdose.
Rise of illicit fentanyl
The opioid epidemic can be traced back to the 1990s when pharmaceutical companies began producing a new range of opioid painkillers, including oxycodone, touting them as less prone to abuse. In addition, prescribing rules were relaxed, while advocates championed the right to freedom from pain. Soon, opioids were being prescribed at alarming rates and increasing numbers of patients were becoming hooked.
Why is there an opioid crisis? Video: SciShow
Franklin, who was the first person to report in 2006 on the growing death rate from prescribed opioids, says: ‘OxyContin is only a few atoms different to heroin – I call it pharmaceutical heroin.’
A crackdown on prescribing was inevitable. But then, with a shortage of prescription opioids, addicts turned to illicit – and cheaper – heroin. According to Franklin, 60% of heroin users became addicted via a prescribed opioid. ‘You don’t have to take these drugs for very long before it’s very hard to get off,’ he says: ‘Just days to weeks.’ Heroin use soared and with it increased tolerance, leading users to seek out more potent highs. By 2013, there were almost 2m Americans struggling with an opioid-use disorder.
Drugs to fight drugs
President Trump declared the opioid crisis a public health emergency in October. Image: Pixabay
Attention is finally being given to the epidemic. US president Donald Trump recently declared a public health emergency, although no new funds will be assigned to deal with the crisis.
There is particular interest around research into a vaccine against fentanyl. Developed by Kim Janda at The Scripps Research Institute, California, US, the vaccine, which has only been tested in rodents, can protect against six different fentanyl analogues, even at lethal doses. ‘What we see with the epidemic, is the need to find alternatives that can work in conjunction with what is used right now,’ he says.
This vaccine could treat heroin addiction. Video: Seeker
The vaccine works by taking advantage of the body’s immune system to block fentanyl from reaching the brain. Its magic ingredient is a molecule that mimics fentanyl’s core structure, meaning the vaccine trains the immune system to recognise the drug and produce antibodies in its presence. These antibodies bind to fentanyl when someone takes the drug, which stops it from reaching the brain and creating the ‘high’.
A huge challenge faced in the pursuit of a mission to Mars is space radiation, which is known to cause several damaging diseases – from Alzheimer’s disease to cancer.
And soon, these problems will not just be exclusive to astronauts. Speculation over whether space tourism is viable is becoming a reality, with Virgin Galactic and SpaceX flights already planned for the near future. The former reportedly sold tickets for US$250,000.
But could questions over the health risks posed hinder these plans?
What is space radiation?
In space, particle radiation includes nuclei of all the elements on the periodic table, each travelling at close to the speed of light, leading to high-energy, violent collisions with the nuclei of atoms in human tissue.
The type of radiation you would endure in space is also different from that you would experience terrestrially. On Earth, radiation from the sun and space is absorbed by the atmosphere, but there is no similar protection for astronauts in orbit. In fact, the most common form of radiation here is electromagnetic – think of the X-rays used in hospitals.
The sun is just one source of radiation astronauts face in space. Image: Pixabay
On the space station – situated within the Earth’s magnetic field – astronauts experience ten times the radiation that naturally occurs on Earth. Even so, the station’s position within this protective field means that astronauts are in far less danger than those travelling to the Moon, or even Mars.
Currently, NASA’s Human Research Program is studying the consequences of astronauts’ exposure to space radiation, as data on the effects is limited to a small number of subjects over short periods of travel.
Radiation poses one of the biggest problems for space exploration. Video: NASA
However, lining the spacecraft with heavy materials to reduce the amount of radiation reaching the body isn’t as easy a solution as it seems.
‘NASA doesn’t want to use heavy materials like lead for shielding spacecraft because the incoming space radiation will suffer many nuclear collisions with the shielding, leading to the production of additional secondary radiation,’ says Tony Slaba, a research physicist at NASA. ‘The combination of the incoming space radiation and secondary radiation can make the exposure worse for astronauts.’
As heavy materials cannot safely dampen the effects of radiation, researchers have turned to a more lightweight solution: plastics. One element – hydrogen – is well recognised for its ability to block radiation, and is present in polyethylene, the most common type of plastic.
A thick dust cloud called the Dark Rift blocks the view of the Milky Way. Image: NASA
Engineers have developed plastic-filled tiles that can be made using astronauts’ rubbish, to create an extra layer of radiation protection. Water, which is already essential for space flight, can be stored alongside these tiles to create a ‘radiation storm shelter’ in the spacecraft.
But research is still required. Plastic is not a strong material and cannot be used as a structural component of spacecraft.
Platinum is one of the most valuable metals in the world. Precious and pretty, it’s probably best known for jewellery – and that is almost certainly its oldest use. But its value has become far greater than its decorative ability; today, platinum powers the world. From agriculture to the oil markets, energy to healthcare, we use platinum far more than we realise.
1. Keep the car running
Platinum is needed to make fuel for transport. Image: Pixabay
Platinum catalysts are crucial in the process that converts naphtha into petrol, diesel, and jet-engine fuel, which are all vital to the global economy. The emissions from those petroleum fuels, however, can be toxic, and platinum is also crucial in the worldwide push to reduce them through automotive catalytic converters. In fact, 2% of global platinum use in 2016 was in converting petroleum and 41% went into reducing emissions – a circle of platinum use that’s more impressive than a ring.
2. Feed the world
Nitric acid, which is used to make fertilisers, is produced using platinum catalysts. Image: Pixabay
Another vital global sector that makes use of platinum catalysts is agriculture. Without synthetic fertilisers, we would not be able to produce nearly as much food as we need. Nitric acid is essential for producing those fertilisers and platinum is essential for producing nitric acid. Since 90% of the gauzes required for nitric acid production are platinum, we may need to use more of it as we try to meet the global food challenge.
3. Good for your health
A pacemaker. Image: Steven Fruitsmaak@Wikimedia Commons
Platinum is extremely hard wearing, non-corrosive, and highly biocompatible, making it an excellent material to protect medical implants from acid corrosion in the human body. It is commonly used in pacemakers and stents. It is also used in chemotherapy, where platinum-based chemotherapeutic agents are used to treat up to 50% of cancer patients.
4. The fuel is clean
In addition to powering the cars of the present and reducing their environmental impact, platinum might well be crucial to the future of transport in the form of fuel cells. Platinum catalysts convert hydrogen and oxygen into clean energy, with water the only by-product.
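The thermodynamics behind that clean conversion can be sketched in a few lines: the ideal (reversible) voltage of a hydrogen fuel cell follows from the standard Gibbs free energy of forming liquid water, E = −ΔG/(nF). The constants below are standard textbook values, not figures specific to this article.

```python
# Ideal cell voltage of a hydrogen fuel cell: H2 + 1/2 O2 -> H2O(l)
FARADAY = 96_485      # C/mol, Faraday constant
delta_g = -237_100    # J/mol, standard Gibbs energy of formation, H2O(l), 25 C
n_electrons = 2       # electrons transferred per H2 molecule oxidised

# E = -dG / (n * F): all the chemical free energy converted to electrical work
cell_voltage = -delta_g / (n_electrons * FARADAY)
print(f"Ideal cell voltage: {cell_voltage:.2f} V")  # ~1.23 V
```

Real platinum-catalysed cells run well below this 1.23 V ideal because of activation and resistive losses, which is one reason better catalysts remain an active research area.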
5. Rags to riches
The Spaniards invaded the Inca Empire, South America, in 1532. Painted by Juan B Lepiani. Image: MALI@Wikimedia Commons
Amazingly, despite all this, platinum was once considered worthless – at least in Europe. In fact, it was considered a nuisance by the Spanish when they first discovered it in South America – a corruption in the alluvial deposits they were earnestly mining – and they would quite literally throw it away. It wasn’t until the 1780s that the Spanish realised it might have some value.
Because platinum is essential to so many aspects of our economy, there are concerns about supply meeting demand – particularly as nearly 80% is currently mined in South Africa, which has seen its mining industry repeatedly crippled by strikes in recent years.
Two Rivers platinum mine, South Africa. Image: Wikimedia Commons
Some believe the solution to the issue of supply is space mining, arguing the metal could be found in asteroids.
Others, such as researchers at MIT, are working to create synthetic platinum, using more commonly found materials. Neither approach is guaranteed to work but, given our increasing dependence on this precious metal, we could be more reliant on their success than we realise.
Around 10 million medical devices are implanted into patients each year, and around one-third of patients suffer some complication as a result. Now, researchers in Switzerland have developed a way to protect implants by dressing them in a surgical membrane of cellulose hydrogel, making them more biocompatible with patients’ own tissues and body fluids.
‘It is more than 60 years since the first medical implant was implanted in humans and no matter how hard we have tried to imitate nature, the body recognises the implant as foreign and tends to initiate a foreign body reaction, which tries to isolate and kill the implant,’ says Simone Bottan, who leads ETH Zurich spin-off company Hylomorph.
Hylomorph is a spin-off company of ETH Zurich, Switzerland. Image: ETH-Bibliothek@Wikimedia Commons
Up to one-fifth of all implanted patients require corrective intervention or implant replacement due to an immune response that wraps the implant in connective tissue (fibrosis), which is also linked with infections and can cause patients pain. Revision surgeries are costly and require lengthy recovery times.
The new membrane is made by growing bacteria in a bioreactor on micro-engineered silicone surfaces, pitted with a hexagonal arrangement of microwells. When imprinted onto the membrane, the microwells impede the formation of layers of fibroblasts and other cells involved in fibrosis.
25,000 people in the UK have a pacemaker fitted each year. Image: Science Photo Library
The researchers ‘tuned’ the bacteria, Acetobacter xylinum, to produce ca 800-micron-thick membranes of cellulose nanofibrils that surgeons can wrap snugly around implants. The cellulose membranes led to an 80% reduction in fibrotic tissue thickness in a pig model after six weeks, according to a study currently in press. Results after three and 12 months should be released in January 2018.
It is hoped the technology will receive its first product market authorisation by 2020. First-in-man trials will focus on pacemakers and defibrillators and will be followed by breast reconstruction implants. The strategy will be to coat the implant with a soft cellulose hydrogel, consisting of 98% water and 2% cellulose fibres.
The membrane will improve the biocompatibility of implants. Video: Wyss Zurich
‘Fibrosis of implantables is a major medical problem,’ notes biomolecular engineer Joshua Doloff at Massachusetts Institute of Technology, adding that many coating technologies are under development.
‘[The claim] that no revision surgery due to fibrosis will be needed is quite a strong claim to make,’ says Doloff, who would also like to see data on the coating’s robustness and longevity.
The silicone topography is designed using standard microfabrication techniques used in the electronics industry, assisted by IBM Research Labs.
In recent years, companies in chemicals and other process industries have been giving much greater priority to process safety improvements, and a safety culture has been created among employees.
Consequently, industrial incidents have been decreasing, particularly in North America and Europe. In the US, a total in 2016 of 213 incidents – covering leaks, fires, explosions, and injuries – was the lowest for 10 years, according to figures from the American Chemistry Council’s (ACC) Responsible Care programme. The ACC’s member companies operate about 2,000 facilities – in 2016, half of its members had no incidents.
Now, chemical companies are confident they can reduce this even further. LyondellBasell, the US-based petrochemicals and polymers multinational, is aiming under its GoalZero programme for no incidents at all. BASF has set itself a goal of an annual rate of process safety incidents of no more than 0.5 per one million working hours by 2025 – a quarter of the 2015 level.
Digitalisation should massively improve safety through initiatives like the use of sensors to signal deficiencies in equipment. Labelled Industry 4.0, digitalisation represents the fourth generation of industrialisation. It has the potential to revolutionise the whole value chain in chemicals and other industries, particularly the manufacturing stages.
In manufacturing, digitalisation can lower costs and improve efficiencies from labour to research and development. In process safety, the main advantages are automation via plant monitoring sensors, drastically reducing manpower. Digitalisation can bring down maintenance costs by as much as 40%, and reduce total plant downtime by 30–50%.
Industry 4.0 is not just about collecting and delivering huge amounts of data to central points, but also about processing and analysing big data. With process safety, it provides analytics platforms for achieving significant improvements in safety performance. A key feature of the current digitalisation wave is that the automation system can be designed in-house by company employees, using computer tools supplied by software specialists. This enables companies to tailor how they use the new technology.
BASF scientists celebrate the installation of its new supercomputer. Image: BASF
BASF has embarked on an ambitious digitalisation programme with the aid of a supercomputer installed this summer at its main site at Ludwigshafen. A primary purpose of the supercomputer is to boost the company’s R&D performance, but it will also make a substantial contribution to advancing process safety.
Martin Brudermueller, BASF vice-chairman and chief technology officer, said in June 2017, ‘As long as we have the data we can use the supercomputer to analyse the causes of process safety incidents. But we are more likely to use it to introduce safer process systems – how we can predict and prevent accidents happening with the help of sensors. We will be able to work out, for example, the level of seriousness of warning signs from sensors, particularly in relation to the degradation of materials.’
Meanwhile, German speciality chemicals company Evonik has seen its incident frequency more than halve since 2008, likely due in part to the application of digital technologies. It wants to use automation to identify and prevent process safety risks.
German polymers and coatings producer Covestro has started collecting data from its plants worldwide on every leak, as well as minor and near-miss incidents. The data are carefully analysed to determine causes, with the results and corrective actions being publicised throughout the group.
Chemicals and other process industries have a long history of collecting, interconnecting, and analysing data to gain added value, but data software company OSIsoft has warned that the large amounts of data yielded by digitalisation will be a big test for existing IT systems.
Some process safety specialists fear that digitalisation could also lead technical staff to become disengaged from safety issues as responsibilities for checking equipment outside control rooms become automated.
‘To be successful, digitalisation projects in areas like process safety need to be matched properly with human factors,’ explains David Embrey, a consultant at Human Reliability in Dalton, Lancashire. ‘Some schemes can be too technology-centric, with not enough consideration of interaction with people.
‘The introduction of new technologies always brings new risks. For a start, will the digital technologies be accepted by the workforce when they are replacing tasks done by humans?’
The ultimate objective behind digitalisation is analytics. Huge amounts of data can be accumulated to create algorithms that tell companies what to do to increase productivity and raise efficiencies, for example through big cuts in downtime as a result of decreases in process safety incidents.
Some would argue that the greatest threat to life as we know it is the slow, invisible war being fought against antibiotic-resistant bacteria. The accidental discovery of penicillin by Fleming in the late 1920s revolutionised modern medicine, beginning with its use in the Second World War.
Over-prescription of these wonder drugs has allowed bacteria, which multiply exponentially, to respond to deadly pressures in their environment at a phenomenal rate, adapting their defence mechanisms so they’re less susceptible to attack. In theory, with an endless supply of different drugs, this would be no big deal.
Alexander Fleming, who discovered penicillin. Image: Wikimedia Commons
Unfortunately, the drug pipeline seems to have run dry, whilst the incidence of resistance continues to climb. For the gnarliest of infections, there’s a list of ‘drugs of last resort’, but resistance even to some of these has recently been observed. A report published by the World Health Organisation echoes these warnings – of the 51 new drugs in clinical development, almost 85% can be considered an ‘upgraded’ version of ones on the market right now. These drugs are a band aid on a snowballing problem.
Are viruses the answer?
Bacteriophages, or phages for short, are viruses that infect only bacteria, wreaking havoc by hijacking cellular machinery for their growth and development.
A bacteriophage. Image: Vimeo
Phages can find themselves in one of two different life cycles: virulent and temperate. The first involves constant viral replication, killing bacteria by turning them inside out (a process known as lysis). The second life cycle allows the phage in question to hitch a ride in the cell it infects, integrating its genetic material into the host’s and in doing so, propagating without causing immediate destruction. It’s the former that is of value in phage therapy.
Long before Fleming’s discovery, phages were employed successfully to treat bacterial infections. In areas of Eastern Europe, phages have been in continuous clinical use since the early part of the 20th century.
Why did their use not take off like penicillin’s in the West? ‘Bad science’ that couldn’t be validated in the early days proved disheartening, and phages were pushed to the wayside. Renewed interest in the field has come about thanks to an improved understanding of molecular genetics and cell biology.
Phages are highly specific and, unlike antibiotics, they don’t tamper with the colonies of bacteria that line our airways and make up a healthy gut microbiome. As they exploit an entirely different mode of action, phages can be used as a treatment against multiple drug-resistant bacteria.
Repeated dosing may not even be necessary – following initial treatment and replication of the phage within infected cells, cell lysis releases ever more phages. Once the infection is cleared, they’re excreted from the body with other waste products.
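The self-amplifying dosing argument above can be sketched with a toy calculation; the burst size and cycle count below are illustrative assumptions, not clinical figures.

```python
# Toy model of phage self-amplification during lytic infection.
# All numbers are illustrative assumptions, not clinical data.
initial_dose = 100   # phages administered
burst_size = 100     # new phages released when one infected cell lyses
cycles = 3           # lytic cycles while susceptible bacteria remain

phages = initial_dose
for _ in range(cycles):
    # each phage infects a cell, which bursts into `burst_size` new phages
    phages *= burst_size

print(f"Phage count after {cycles} cycles: {phages:,}")  # 100,000,000
```

Even with modest assumptions, the population grows geometrically until the susceptible bacteria run out, which is why a single administration can, in principle, suffice.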
What is holding it back?
A number of key issues must be ironed out if phage therapy is to be adopted to fight infection as antibiotics have. High phage specificity means different phage concoctions might be needed to treat the same illness in two different people. Vast libraries must be created, updated and maintained. Internationally, who will be responsible for maintenance, and will there be implications for access?
Scientists are looking at new ways to tackle antibiotic resistance. Video: TEDx Talks
Despite proving a promising avenue for (re)exploration, under-investment in the field has hindered progress. Bacteriophage products are hard to patent, which dampens pharmaceutical companies’ willingness to invest capital. AmpliPhi Biosciences, a San Diego-based biotech company that focuses on the ‘development and commercialization of novel bacteriophage-based antibacterial therapeutic,’ was granted a number of patents in 2016, showing it is possible. This holds some promise – viruses might not save us yet, but they could be well on their way.
Often, the pharmaceutical industry is characterised as the ‘bad guy’ of equality in healthcare. This is particularly evident in the United States, with cases such as that of Martin Shkreli, whose company Turing Pharmaceuticals infamously raised the price of its leading HIV and malaria drug more than 50-fold overnight, and a lack of regulation in drug advertising. Such advertising is accused of influencing prescriptions of certain brands based on consumer demand, which can lead to unnecessary treatment and addiction.
With stories like these dominating the media, it is no wonder the public is often found to harbour a negative view of ‘Big Pharma’. However, the actions and motives of this industry are rarely fully understood. Here are five facts about pharmaceutical manufacturing you might not know:
1. Out of 5,000-10,000 compounds tested at the pre-clinical stages, only one drug will make it to market
The drug discovery and development process explained. Video: Novartis
This may seem like slim odds, but there are many stages that come before drug approval to make sure the most effective and reliable product can be used to treat patients.
There are four major phases: discovery and development; pre-clinical research, including mandatory animal testing; clinical research on people/patients to ensure safety; and review, where all submitted evidence is analysed by the appropriate body in hopes of approval.
2. If discovered today, aspirin might not pass current FDA or EMA rules
Some older drugs on the market would not get approval due to safety issues. Image: Public Domain Pictures
Problems with side effects – aspirin is known to cause painful gastrointestinal problems with daily use – mean that some older drugs that remain available might not have gained approval for widespread use today. Both the US Food and Drug Administration (FDA) and European Medicines Agency (EMA) run programmes that monitor adverse side effects in users to keep consumers up-to-date.
Tighter regulation and increased competition mean that the medicines we take today are arguably more effective and safer than ever.
3. The average cost of drug development has increased by a factor of 15 in 40 years
Back in the 1970s, the cost to take a drug from discovery to market was $179 million. Today, drug companies shell out $2.6 billion for the same process – a 1,352% increase! Even after adjusting for inflation, the rise is substantial.
With the average length of time needed to develop a drug now 12 years, time is an obvious reason for the high costs. However, the difficulty of finding suitable candidates at the discovery stage is also to blame. Pre-clinical stages can be resource-intensive and time-consuming, making pharmaceutical companies look towards other methods, such as the use of big data.
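As a quick sanity check on the figures quoted above (treating the published $179 million and $2.6 billion estimates as given, with no inflation adjustment):

```python
# Sanity-check the drug-development cost figures quoted in the text.
cost_1970s = 179e6   # $179 million (1970s estimate)
cost_today = 2.6e9   # $2.6 billion (current estimate)

factor = cost_today / cost_1970s
pct_increase = (cost_today - cost_1970s) / cost_1970s * 100

print(f"Cost multiple: {factor:.1f}x")              # 14.5x, i.e. roughly a factor of 15
print(f"Percentage increase: {pct_increase:.0f}%")  # 1353%, matching the ~1,352% quoted
```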
4. The US accounts for nearly half of pharmaceutical sales
The Statue of Liberty. Over 40% of worldwide medicines sales are made by US companies. Image: Wikimedia Commons
The US is the world leader in pharmaceutical sales, with the industry adding $1.2 trillion to US economic output in 2014 and supporting 4.7 million jobs. The country is also home to several of the top 10 performing pharmaceutical companies, including Merck, Pfizer, and Johnson & Johnson.
While the EU’s current share is worth 13.5%, this is expected to fall by 2020 with emerging research countries, such as China, projected to edge closer to the US with a share of 25%.
5. Income from blockbuster drugs drives research into rare diseases
Rare diseases are less likely to receive investment for pharmaceutical research. Image: Pixabay
Diseases that affect a large proportion of the worldwide population, such as cancer, diabetes, or depression, are able to produce the biggest revenue for pharmaceutical companies due to the sheer volume of demand. But rarer diseases are not forgotten, as research into these illnesses is likely funded by income from widespread use of the aforementioned medicines.
Rare – or ‘orphan’ – diseases are those that affect a small number of the population, or diseases that are more prevalent in the developing world. With the increasing cost of producing a drug, it becomes risky for pharmaceutical companies to create a fairly-priced drug for a small fraction of patients.
However, this seems to be changing. Researchers from Bangor University, UK, found that pharmaceutical companies that market rare disease medicines are five times more profitable than those that do not, and have up to 15% higher market value, which could finally provide a financial incentive for necessary research.
Energy storage is crucial in today’s world. Beyond the batteries in our remote controls, our mobile phones, and our laptops, advances in energy storage could solve a central problem of renewable power: preserving energy generated at times of low demand.
Advances in lithium-ion batteries have dominated the headlines in this area of late, but a variety of developments across the field of electrode materials could become game changers.
1. In the beginning, there were metals
The Daniell cell, an early battery from 1836 using a zinc electrode. Image: Daderot
Early batteries used metallic electrodes, such as zinc, iron, platinum, and lead. The Daniell cell, invented by British chemist John Frederic Daniell and the historical basis for the volt measurement, used a zinc electrode just like the early batteries produced by scientists such as Alessandro Volta and William Cruickshank.
Alterations elsewhere in the Daniell cell substantially improved its performance compared with existing battery technology and it became the industry standard.
2. From acid to alkaline
Waldemar Jungner: the Swedish scientist who developed the first Nickel-Cadmium battery. Image: Svenska dagbladets årsbok 1924
Another major development in electrode materials came with the first alkaline battery, developed by Waldemar Jungner using nickel (Ni) and cadmium (Cd). Jungner had experimented with iron instead of cadmium but found it considerably less successful.
The Ni–Cd battery had far greater energy density than the other rechargeable batteries at the time, although it was also considerably more expensive.
3. Smaller, lighter, better, faster
Organic materials for microbattery electrodes are tested on coin cells. Image: Mikko Raskinen
Want your electronic devices to be even smaller and lighter? Researchers from Aalto University, Finland, are working on improving the efficiency of microbatteries by fabricating electrochemically active organic lithium electrode thin films.
The team use lithium terephthalate, a recently identified anode material for lithium-ion batteries, preparing it with a combined atomic/molecular layer deposition technique.
4. There’s more to life than lithium
50-70% of the world’s known lithium reserves are in Salar de Uyuni, Bolivia. Image: Anouchka Unel
Lithium-ion batteries have dominated the rechargeable market since their emergence in the 1990s. However, lithium’s relative scarcity means that, increasingly, research and development is focused elsewhere.
Researchers at Stanford University, USA, believe they have created a sodium-ion battery with the same storage capacity as lithium but at 80% less cost. The battery uses a sodium salt for the cathode and phosphorus for the anode.
5. Back to the start
Advances are also being made in the electrode materials used in artificial photosynthesis. Video: TEDx Talks
Hematite and other cheap, plentiful metals are being used to create photocatalytic electrode materials by a team of scientists from Tianjin University, China. The approach, which combines nanotechnology with chemical doping, can produce a photocurrent more than five times higher than current approaches to artificial photosynthesis.
You can read an interview with the recipient of SCI’s 2017 Castner Medal, who delivered the lecture Developments in Electrodes and Electrochemical Cell Design, here.
Concrete is a common fixture in the building blocks of everyday life. Image: US Navy@Wikimedia Commons
Concrete is the most widely used construction material in the world, with use dating back to Ancient Egypt.
Predictably, our needs concerning construction and the environment have changed since then, but the abundance of concrete and its uses have not. We still use concrete to build infrastructure, but building standards have changed dramatically.
Dubai city landscape. Concrete is predominantly used in residential buildings and infrastructure. Image: Pixabay
Its immense use, from house foundations to roads, means that problems cannot easily be fixed through removal of the old and replacement with the new. Such constraints have seen researchers focus on unique ways to solve the problems that widespread use of concrete can create for industry.
In the UK, four universities have created ‘self-healing’ concrete as part of a collaborative project, known as Resilient Materials 4 Life (RM4L), to produce materials that can repair themselves. Currently, monitoring and fixing building materials costs the UK construction industry £40 billion a year.
Microcapsules mixed through the cement break apart when tiny cracks begin to appear. The group has also tested shape-memory polymers that pull the cracks closed and prevent further damage. These techniques have shown success in long-term trials and in scaled-up structural elements, said Prof Bob Lark, lead investigator for RM4L at Cardiff University, speaking to Materials World magazine.
RM4L already has 20 industry partners and there is hope that, in the future, technologies can be transferred to other materials, although it has not yet reached the commercialisation stage.
Lark said: ‘What we have to do now is improve the reliability and reduce the cost of the techniques that we have developed so far, but we also need to find other, more efficient and perhaps more tailored approaches that can ensure we address the full range of damage scenarios that structures can experience.’
Making concrete eco-friendly
The abundance of concrete globally comes with an equally large carbon footprint: concrete production accounts for around 5% of the CO2 produced by humans each year. For every tonne of concrete made, we contribute one tonne of CO2 to our surroundings. It is primarily the vast quantity produced each year that causes this high level of environmental damage, as concrete is otherwise a ‘low impact’ material.
This inherent characteristic has led some scientists to develop stronger types of concrete. Here, the building features and low environmental impact of the material remain the same, but because less is needed of the stronger concrete to perform the same job, carbon emissions are reduced significantly.
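To see how that trade-off plays out, here is a minimal sketch of the arithmetic; the 1 tonne of CO2 per tonne of concrete is the article’s figure, while the 30% material reduction and the job size are hypothetical assumptions for illustration only.

```python
# Illustrative CO2 saving from substituting a stronger concrete mix.
# co2_per_tonne is the article's figure; the other numbers are hypothetical.
co2_per_tonne = 1.0        # tonnes of CO2 emitted per tonne of concrete
standard_tonnage = 1000.0  # tonnes of standard concrete for a hypothetical job
material_reduction = 0.30  # assume the stronger mix needs 30% less material

stronger_tonnage = standard_tonnage * (1 - material_reduction)
co2_saved = (standard_tonnage - stronger_tonnage) * co2_per_tonne

print(f"CO2 saved: {co2_saved:.0f} tonnes")  # 300 tonnes on this hypothetical job
```

Because emissions scale roughly with tonnage, any fractional reduction in material translates directly into the same fractional cut in CO2.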
Another approach to tackling emissions is the ‘upcycling’ of CO2 into concrete. At UCLA, researchers have created a closed-loop process that uses CO2 captured from power plants to create a 3D-printed building material dubbed CO2NCRETE.
‘It could be a game-changer for climate policy,’ said Prof JR DeShazo, Director of the Luskin Center for Innovation at UCLA. ‘It takes what was a problem and turns it into a benefit in products and services that are going to be very much needed and valued in places like India and China.’
The Mary Rose is a maritime archaeologist’s dream: a Tudor time capsule containing not only the structure of the naval warship itself, but more than 26,000 artefacts, providing invaluable historical insight. Raised in 1982, 11 years after its discovery in the Solent, the wreck and its many treasures required not only countless hours of restoration and conservation work, but many ingenious scientific solutions.
1. Pond snails helped preserve the timbers
To prevent the growth of fungi and microbes on the wooden frame, the Mary Rose restoration team used common pond snails, which ate the wood-degrading organisms but left the wood untouched – as well as employing more commonly known methods, such as low-temperature storage and chemical preservation.
2. Its water was replaced with polyethylene glycol
A technician services the spraying system. Image: The Mary Rose Trust
To prevent the wood from warping, cracking and shrinking by up to 50% as the water evaporated, it was sprayed regularly with filtered, recycled water. In 1994, the conservation team began to gradually replace the water in the cellular structure of the wood with polyethylene glycol (PEG). A low-molecular-weight PEG was used for the first nine years, before seven years of spraying with a higher weight PEG to strengthen the outer layer. The remains were then carefully air dried – a process that was completed in 2016.
3. Crew members brought to life with virtual 3D reconstructions
3D virtual models of the crew and artefacts have provided a deeper look at Tudor history. Image: Pixabay
Mary Rose researchers used 3D technology to create virtual representations of crew members, clothing, and tools, encouraging scientists worldwide to participate in the project. The models have provided an opportunity to investigate the lifestyles of the Tudor crew.
4. Intact cannons were found
Bronze and iron cannons found on the Mary Rose were preserved using different methods. Pictured are a bronze (front) and iron (back) cannon. Image: Wikimedia Commons
Gunpowder and heavy artillery were increasingly used by infantry and on ships around the time the Mary Rose was built, so many of the cannons and guns found on board were made from metals such as iron and bronze – metals that are difficult to preserve after centuries of submersion in seawater. The bronze cannons were lightly bathed in a sodium sesquicarbonate solution, and the iron ones preserved using hydrogen reduction, to prevent the oxidation that corrodes these artefacts.
Divers who have discovered around 60 shipwrecks in the Black Sea face a similar problem: the wrecks are perfectly preserved by the unusual anoxic conditions of the water, which led the team to study the objects using 3D printing rather than bringing the ships ashore.
5. Part of the Mary Rose has been to space
The space shuttle Endeavour orbits the Earth. Image: Public Domain Pictures
For the shuttle Endeavour’s final trip to space in 2011, astronauts elected to take with them a parrel ball – part of a sailing rig – from the Mary Rose, continuing a long tradition of carrying commemorative items into space. The shuttle took off from Kennedy Space Center for the International Space Station on 16 May 2011, and the artefact spent a total of 17 days in space, after an extended period of decontamination to make it suitable for space travel.
Interested in the Mary Rose? Why not register to attend Mary Rose - From Seabed to Showcase, the Making of a British Icon – our free Public Evening lecture with Helen Bonser-Wilton, Chief Executive of the Mary Rose Trust, in London on 25 November.