The world’s biggest ever survey of public opinion on climate change was published on 27 January by the United Nations Development Programme (UNDP) and the University of Oxford, covering 50 countries that are home to over half of the world’s population. Of the respondents, 64% believe climate change is a global emergency, despite the ongoing Covid-19 pandemic, and want broader action to combat it. Earlier in the month, US President Joe Biden reaffirmed the country's commitment to the Paris Agreement on Climate Change.
This momentum, combined with the difficulties many countries currently face, may make many look again at geoengineering. Could large-scale engineering techniques mitigate the damage of carbon emissions? And is it safe to try, or could we end up exacerbating the problem?
The term has long been controversial, as have many of the suggested techniques. But it would seem that some approaches are gaining more mainstream interest, particularly Carbon Dioxide Removal (CDR) and Solar Radiation Modification (SRM), which the 2018 Intergovernmental Panel on Climate Change (IPCC) report for the UN suggested were worth further investigation (significantly, it did not use the term "geoengineering" and distinguished these two methods from others).
One of the most covered CDR techniques is Carbon Capture and Storage (CCS) or Carbon Capture, Utilisation, and Storage (CCUS), the process of capturing waste carbon dioxide, usually from carbon-intensive industries, and storing (or first re-using) it so it will not enter the atmosphere. Since 2017, after a period of declining investment, more than 30 new integrated CCUS facilities have been announced. However, there is concern among many that it will encourage further carbon emissions, when the goal should be to reduce them and use CCS only to buy time to do so.
CDR techniques that harness natural processes of repair, such as reforestation, agricultural practices that absorb carbon in soils, and ocean fertilisation, are areas that many feel could and should be pursued on a large scale. They would come with ecological and biodiversity benefits, as well as fostering a different, more beneficial relationship with local environments.
A controversial iron compound deposition approach has been trialled to boost salmon numbers and biodiversity in the Pacific Ocean.
The ocean is a mostly untapped resource with huge potential, and iron fertilisation is one very promising technique. The controversial Haida Salmon Corporation trial in 2012 is perhaps the best-known example and brings together many of the pros and cons frequently discussed in geoengineering – in many ways, we can see it as a microcosm of the bigger issue.
The trial deposited 120 tonnes of iron compound in the migration routes of pink and sockeye salmon in the Pacific Ocean, 300 km west of Haida Gwaii, over a period of 30 days. The result was a 35,000 km², several-month-long phytoplankton bloom, confirmed by NASA satellite imagery. That phytoplankton bloom fed the local salmon population, revitalising it – the following year, the number of salmon caught in the northeast Pacific went from 50 million to 226 million. The local economy benefited, as did the biodiversity of the area, and the bloom captured carbon (as did the biomass of fish, for their lifetimes).
Small but mighty, phytoplankton are the labourers of the ocean, serving as the base of the food web.
But Environment Canada believes the corporation violated national environmental laws by depositing iron without a permit. Much of the fear around geoengineering is how much might be possible by rogue states or even rogue individuals, taking large scale action with global consequences without global consent.
The conversation around SRM has many similarities – who decides that the pros are worth the cons, when the people most likely to suffer the negative effects, with or without action, are already the most vulnerable? It is a concern shared by leading experts. Professor David Keith has publicly spoken about his concern around climate change and inequality, adding after the latest study that, "the poorest people tend to suffer most from climate change because they’re the most vulnerable. Reducing extreme weather benefits the most vulnerable the most. The only reason I’m interested in this is because of that."
But he doesn't believe anywhere near sufficient research has been done into the viability of the approach or the possible consequences and cautions that there is a need for "an adequate governance system in place".
There is no doubt that the research in this field is exciting but there are serious ethical and governance problems to be dealt with before it can be considered a serious component of an emissions reduction strategy.
Every garden centre will currently bombard you with colourful displays of seed packets (figure 1). Each contains tiny grains of dormant life. Provided with water, warmth, suitable soil or compost and eventually light (figure 2) that resting grain will transform into the roots and shoots of a new plant.
Figure 1: Racks of seed packets
Inside that seed, cascades of genes trigger enzymes which release energy from stored starch and, in some cases, lipids. As a result, the seed coat opens and a root emerges, taking in supplies of water and nutrients. Shoots follow, growing upwards towards the light. They turn green as chlorophyll is manufactured and photosynthesis commences. At that point the seemingly inert grain becomes a self-sustaining living plant. Root and shoot growth result from active cell divisions, with genetic controls determining the form and functions of each organ.
Figure 2: Germinating seeds and the correct conditions
Each seed’s complement of genes will determine what type of plant develops. But it is the environment provided by the gardener which determines the plant’s success. Careful and accurate husbandry results in succulent, health-promoting vegetables or colourful, vigorous flowers. Seedlings of some plants may be given nursery treatment before being placed into the garden’s big wide world. Providing protection in the early stages, either in a greenhouse or under cloches, boosts the growth of many annual flowers and most vegetables (figure 3) and eventually the quality of the produce.
Figure 3: Legumes grown under protection
This does require time, skill and investment by the gardener. An alternative is purchasing seedlings from garden centres (figure 4). But an element of caution is required. These plants will have been raised under protection, so when planted directly into the garden they still need care and attention. Frost protection and watering are essential, otherwise poor results may follow.
Figure 4: Garden centre seedlings
Direct sowing seeds into garden soil is another alternative. Hardy vegetables and annual flowers may be cultured in this way. The requirements for success are a fertile soil with a fine tilth – one free from stones, consisting of uniform, aggregated particles that allow unimpeded movement of air and water.
Vegetables such as beetroot, carrots and parsnips will grow vigorously given these conditions. Hardy annuals such as African daisy, larkspur, love-in-a-mist, marigold and nasturtium will also thrive from direct sowings. Success in both garden departments depends on watering during dry spells and supplementary nutrition. Avoid nitrogenous fertilisers, as these encourage leaf growth, whereas phosphate (P) and potassium (K) promote root and flower formation.
The Big Picture
In 2018, UK CO2 emissions totalled roughly 364 million tonnes. This was 2.4% lower than in 2017 and 43.5% lower than in 1990. The figure below shows how much each sector contributed to total UK carbon dioxide emissions in 2018. As can be seen, the large emitting sectors include energy supply, transport and residential. For this reason, CO2 emission trends from these sectors are discussed in this article.
Figure 1: Percentage contribution to total UK greenhouse gas emissions by sector (2018). Figure: BEIS. Contains public sector information licensed under the Open Government Licence v1.0.
In 2018, the transport sector accounted for a third of total UK CO2 emissions. Since 1990, there has been relatively little change in the level of greenhouse gas emissions from this sector. Historically, transport was the second most-emitting sector. However, due to emission reductions in the energy supply sector, it has been the biggest emitting sector since 2016. Emission sources include road transport, railways, domestic aviation, shipping, fishing, and aircraft support vehicles.
The main sources of emissions are petrol and diesel use in road transport.
Ultra-low emission vehicles (ULEVs) can provide emission reductions in this sector; examples include hybrid electric, battery electric and hydrogen fuel cell vehicles. In 2018, there were 200,000 ULEVs on the road in the UK, and ULEV registrations were 53% higher than in 2016. In 2018, the UK government released its ‘Road to Zero Strategy’, which aims for 50% of new cars and 40% of new vans to be ULEVs by 2030.
Energy Supply Sector
In the past, the energy supply sector was the biggest emitting sector but, since 1990, it has reduced its greenhouse gas emissions by 60%, making it the second-biggest emitter. Between 2017 and 2018, this sector accounted for the largest decrease in CO2 emissions (7.2%). Emission sources include fuel combustion for electricity generation and other energy production. The main sources of emissions are the use of natural gas and coal in power plants.
In 2015, the Carbon Price Floor tax rose from £9 to £18 per tonne of CO2 emitted. This resulted in a shift from coal to natural gas for power generation. There has also been considerable growth in low-carbon technologies for power generation. All of these have contributed to emission reductions in this sector.
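To see why a higher carbon price favours gas, consider a rough back-of-envelope comparison. Because coal emits roughly two and a half times more CO2 per unit of electricity than gas, doubling the carbon price penalises it far more. The fuel costs and emission factors below are illustrative assumptions, not official figures:

```python
# Illustrative sketch: how a rising carbon price can flip the cost ranking
# of coal vs gas generation. Fuel costs and emission factors are rough
# assumptions for illustration only.

def generation_cost(fuel_cost, emission_factor, carbon_price):
    """Cost in £/MWh: fuel cost plus carbon cost (tCO2/MWh * £/tCO2)."""
    return fuel_cost + emission_factor * carbon_price

COAL = {"fuel_cost": 30.0, "emission_factor": 0.9}   # assumed £/MWh, tCO2/MWh
GAS  = {"fuel_cost": 38.0, "emission_factor": 0.35}  # assumed £/MWh, tCO2/MWh

for price in (9.0, 18.0):  # Carbon Price Floor before and after 2015
    coal = generation_cost(COAL["fuel_cost"], COAL["emission_factor"], price)
    gas = generation_cost(GAS["fuel_cost"], GAS["emission_factor"], price)
    print(f"at £{price}/tCO2: coal £{coal:.2f}/MWh, gas £{gas:.2f}/MWh")
```

With these assumed numbers, coal is the cheaper option at £9/tonne, but the £18/tonne price tips the balance towards gas.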
Figure 2 - Natural gas power plant
Out of the total greenhouse gas emissions from the residential sector, CO2 emissions account for 96%. Emissions from this sector are heavily influenced by external temperatures. For example, colder temperatures drive higher emissions as more heating is required.
In 2018, this sector accounted for 18% of total UK CO2 emissions. Between 2017 and 2018, there was a 2.8% increase in residential emissions; overall, emissions from this sector have dropped by 16% since 1990. Emission sources include fuel combustion for heating and cooking, garden machinery and aerosols. The main source of emissions is natural gas used for heating and cooking.
The UK has reduced CO2 emissions by 43.5% since 1990, but further reductions are required to meet net-zero targets. The energy supply sector has cut emissions by 60% since 1990 yet remains the second-biggest emitter. Emission reductions in the residential sector are modest by comparison, though still greater than those in transport, which has remained relatively static. Each of these sectors requires significant emission reduction to help meet new emission targets.
You are a student of an agricultural discipline and suddenly you are asked a question about climate change. ‘It’s going to get warmer in general,’ you start to say, ‘more variable with bigger storms and droughts.’
You sound confident, but deep down you know that you’re not so sure about what will happen to pesticides. Will everything be better in the future? You need some hard science so you can make up your own mind - with so little time, where will you find that science? Help is at hand!
Here are five facts to give you all you need, to sound like the expert that you want to be.
1 - Pesticides won’t always degrade faster if it’s warmer
Chemical reactions do go faster at warm temperatures. So you would expect that pesticides would degrade faster in Spain than in Sweden.
However, hot temperatures are often linked to dry soils. Microbial reaction rates in dry soils are slower than in wet soils, although pure chemical reactions don’t have this limitation.
Temperature and moisture maps of Europe. Images: Atlas of the biosphere
Suppose the rates in Spain are four times faster due to temperature but five times slower due to moisture – on balance, the reaction rates may actually be faster in Sweden! You have to consider both temperature and moisture when you think about pesticide reaction rates in soil.
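As a quick sanity check, the two opposing factors can be multiplied together (a simplifying assumption – real degradation kinetics are more complicated than a simple product of factors):

```python
# Toy model of the combined effect: temperature alone makes degradation
# 4x faster in Spain than Sweden, but drier soil makes the microbial
# contribution 5x slower. The multiplicative model is an assumption
# for illustration.

def relative_rate(temperature_factor, moisture_factor):
    # Overall relative degradation rate as a product of the two effects
    return temperature_factor * moisture_factor

spain_vs_sweden = relative_rate(4.0, 1.0 / 5.0)
print(f"Spain relative to Sweden: {spain_vs_sweden:.2f}x")  # 0.80x
```

A value of 0.8 means degradation in Spain runs at only 80% of the Swedish rate in this toy model, despite the warmer climate.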
2 - Warmer water doesn’t necessarily contain more pesticide
Substances tend to become more volatile as you heat them. So if you place a large body of water, like a lake, in a slightly warmer climate, the amount of pesticide in it will decrease slightly.
At a fixed temperature and pressure, there is a constant ratio between the concentrations in the air and water phases. But raise the temperature and the ratio changes too, so more of the substance ends up in the air phase.
Losses to the air are really small for most pesticides, and contamination of water is usually a bigger concern. But it is still interesting to note that pesticides tend to leave hotter water and head up, up and away into the atmosphere.
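The temperature effect on the air-water ratio can be sketched with a van't Hoff-style relation. The reference constant and the enthalpy of phase transfer below are illustrative assumptions, not data for any specific pesticide:

```python
import math

# Sketch of air-water partitioning: the dimensionless Henry constant
# Kaw = c_air / c_water rises with temperature, roughly following a
# van't Hoff relation. Kaw_ref and dH are assumed values for illustration.

R = 8.314  # gas constant, J/(mol*K)

def henry_constant(T, Kaw_ref=1e-4, T_ref=283.15, dH=50_000.0):
    """Dimensionless air/water concentration ratio at temperature T (K).
    dH: enthalpy of transfer from water to air, J/mol (assumed)."""
    return Kaw_ref * math.exp(-dH / R * (1.0 / T - 1.0 / T_ref))

cold, warm = henry_constant(283.15), henry_constant(298.15)
print(f"Kaw at 10 C: {cold:.2e}, at 25 C: {warm:.2e}")
# Warmer water pushes a larger fraction of the pesticide into the air phase.
```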
3 - Cold weather can reduce pesticide leaching
Leaching is the process by which water trickling through the soil takes pesticide with it, on a journey that can end at the water table. In Europe, colder weather is often associated with wet weather, so you would normally expect the excess rain to carry the pesticide on its journey all the more. However, there are situations where this is not the case.
Pesticide leaching has paused. Image: Richard Walker/ Flickr
1. Really cold weather
The ground freezes and nothing moves anywhere. When things start to warm up a bit, there could be a fair abundance of water trying to percolate through. The question is: will the ground stay frozen long enough that the water runs off before it ever gets into the soil?
2. Cold weather can pull pesticide out of the water
The amount of pesticide in the water is balanced with the pesticide stuck to the soil – we usually call this sticking ‘sorption’. Several studies indicate that when the temperature drops, the balance swings away from the water and towards the solid. That means there is less available for leaching.
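A minimal sketch of the idea, assuming a simple linear sorption isotherm and an illustrative doubling of the sorption coefficient Kd in the cold:

```python
# Toy model: if sorption strengthens as temperature falls, a larger share
# of the pesticide sits on the solid phase and less is dissolved and
# available to leach. The Kd values are assumed, not measured.

def dissolved_fraction(Kd, soil_water_ratio=5.0):
    """Fraction of pesticide in solution, given a soil/water ratio (kg/L)
    and a linear sorption coefficient Kd (L/kg)."""
    return 1.0 / (1.0 + Kd * soil_water_ratio)

warm_Kd, cold_Kd = 1.0, 2.0   # assumption: sorption doubles in the cold
print(f"dissolved (warm): {dissolved_fraction(warm_Kd):.2f}")
print(f"dissolved (cold): {dissolved_fraction(cold_Kd):.2f}")
```

In this toy model, the dissolved – and therefore leachable – fraction drops from about 17% to about 9% as sorption strengthens.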
4 - A lot of rain doesn’t mean a lot of pesticide runoff
If you have heavy rain, several different factors come into play.
First of all, the state of the soil has an impact – is it easy for the water to penetrate?
You also have the slope: steeply sloping land will lead to runoff earlier in the rain event than gently sloping land.
Finally, you have the pattern of rainfall. If it all falls at once, runoff will be much more likely, because there won’t be time for it to soak in.
The pattern of rainfall intensity is called the ‘erosivity’ of the rain. If you take the average erosivity over a long period of time, you can build a map of where erosivity is generally highest.
There are some pretty wet places with low erosivity, such as Ireland, Denmark and Northern Germany. Some dry spots in Spain and southern Italy have high erosivity.
Total rainfall (left) and erosivity (right). Image: Panagos et al. (2015)
5 - The main effects of climate change on pesticide fate will not be due to physics or chemistry
There is a clear consensus that the climate is changing. The climate influences a range of agro-ecological features, for example the timing of pest infestation or infection, the rate of plant growth and the soil conditions, such as the organic carbon content, moisture status and the extent of cracking.
Would you choose to sow oilseed rape? It partly depends on the climate. Image: Simon Rowe/Flickr
The indirect effects of climate on pesticide fate can be considered as a tension between the twin societal drivers of maximising production and maintaining environmental safety. Will the indirect effects outweigh the direct effects? I think they will and I am not alone.
So, the biggest effect of climate change on pesticide fate is not physics and chemistry but a series of responses of farmers, consumers, producers, retailers and politicians in how we all decide to react to the changes.
A catalyst is a substance that reduces the energy input required for a reaction – many industrial processes use a catalyst to make them feasible and economic.
There are many types of catalysts for different applications; zeolite catalysts are used commercially to reduce the negative effects of exhaust fumes from diesel engines and to produce fuels more efficiently. Catalysts can be studied with light – a technique called spectroscopy – to help understand how they work.
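The effect of lowering the energy barrier can be illustrated with the Arrhenius equation, k = A·exp(−Ea/RT). The activation energies below are illustrative, not measured values for any particular zeolite-catalysed reaction:

```python
import math

# Sketch of why lowering the energy barrier matters: the rate constant
# grows exponentially as the activation energy Ea falls. Ea values and
# temperature here are assumptions for illustration.

R = 8.314  # gas constant, J/(mol*K)

def rate_constant(Ea, T=500.0, A=1.0):
    """Relative Arrhenius rate constant for activation energy Ea (J/mol)."""
    return A * math.exp(-Ea / (R * T))

uncatalysed = rate_constant(150_000.0)
catalysed = rate_constant(100_000.0)   # catalyst lowers barrier by 50 kJ/mol
print(f"speed-up: {catalysed / uncatalysed:.1e}x")
```

Cutting the barrier by 50 kJ/mol at 500 K speeds the reaction up by a factor of over 100,000 in this sketch, which is why so many industrial processes only become feasible with a catalyst.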
My PhD research has greatly benefitted from the use of synchrotron radiation. It helped me to gain detailed mechanistic insight into how the zeolite catalyst works. To date, I have completed four scientific visits at the Diamond Light Source, which is the UK’s national synchrotron facility, located in Oxfordshire.
The facility opened in 2007 and supports both industrial and academic research.
What is a synchrotron?
Diamond Light Source. Image credit: Diamond Light Source
A synchrotron generates very bright beams of light by accelerating electrons to close to the speed of light and bending them through multiple magnets. The broad spectrum of light produced, ranging from X-rays to infrared (IR), is selectively filtered at the experimental laboratories (beamlines), where a specific region of the electromagnetic spectrum is utilised. My work uses the IR part of the spectrum: IR light has the right energy to probe bond stretches and deformations, allowing molecules to be observed and identified.
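Why IR light matches bond stretches can be seen from the harmonic oscillator model, where the vibrational wavenumber is (1/2πc)·√(k/μ). The O-H force constant below is a textbook-style assumption, not a value measured for any specific zeolite:

```python
import math

# Sketch: a chemical bond behaves roughly as a harmonic oscillator, so its
# stretch frequency follows from the force constant k and reduced mass mu.
# The force constant used below is an assumed, textbook-style value.

C = 2.998e10          # speed of light, cm/s
AMU = 1.6605e-27      # atomic mass unit, kg

def stretch_wavenumber(k, m1, m2):
    """Harmonic stretch wavenumber (cm^-1) for force constant k (N/m)
    and atom masses m1, m2 in amu."""
    mu = (m1 * m2) / (m1 + m2) * AMU  # reduced mass, kg
    return math.sqrt(k / mu) / (2.0 * math.pi * C)

# O-H stretch with an assumed force constant of ~780 N/m
print(f"O-H stretch: ~{stretch_wavenumber(780.0, 16.0, 1.0):.0f} cm^-1")
```

The result, around 3,750 cm⁻¹, falls squarely in the mid-IR, which is why hydroxyl stretches show up so clearly at an IR beamline.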
A highlight from last year has been attending a joint beamtime session with Prof Russell Howe and Prof Paul Wright at Diamond’s IR beamline (MIRIAM, B22). The MIRIAM beamline is managed by Dr Gianfelice Cinque and Dr Mark Frogley.
The synchrotron enables us to capture the catalyst in action during the methanol to hydrocarbons reaction. The changes in the zeolite hydroxyl stretches we observe correlate with the detection of the first hydrocarbon species downstream.
A cartoon illustration of the evolution of the zeolite hydroxyl stretch band during the methanol to hydrocarbons process. Image credit: Ivalina Minova
What is it like researching at Diamond?
My access to Diamond is typically spread over six-month intervals. To secure beamtime, we have to submit a two-page research proposal, which is assessed by a scientific peer review panel; a successful proposal is allocated three or four days to complete the proposed experiments.
Dr Helen Sharman always knew that she wanted to study science at university, and considered biology, medicine, and physics before deciding on chemistry.
‘After – well, in fact, during – my degree, I always knew I wanted to go into industry,’ Dr Sharman said, as she began her fully booked evening lecture at SCI’s London headquarters. ‘I just thought chemistry was going to be a fabulous way of keeping my options open.’
How right she was – Dr Helen Sharman’s CV reads like an especially far-fetched answer to the question, ‘What would you like to be when you grow up?’
Before taking the job for which she became best known, Helen Sharman worked for General Electric, developing screens and coatings for electronics; and then as a chemist for the confectioner Mars, where she was part of the team that developed the innovative Mars Ice Cream – a canny solution to the seasonal dip in chocolate bar sales over summer.
‘I then moved on in my job to the next department – the chocolate department,’ she continued, a smattering of oohs and aahs returning from the audience.
‘One of the tasks I had to do every day was to trundle down to the production line and take samples of chocolate and’ – she whispered – ‘taste it’.
‘There I was, using my chemistry, in industry, in a production environment, tasting chocolate. I loved it.
‘What better job could anybody have?’
There can’t be many. But one day, while driving home, Helen Sharman heard five words on a radio advert that could tempt her away even from her dream job at a chocolate factory.
‘Astronaut wanted: no experience necessary’.
No experience necessary
The advert was for Project Juno, the private British space programme to select the country’s first ever astronaut, who would join three Russian astronauts on the Mir Space Station for eight days.
‘They were looking for people who were qualified in something like science, engineering, medicine – something technical – and someone who did a practical job with their hands, because the ultimate astronaut was going to need to do experiments in space,’ Dr Sharman explained.
The astronaut would also need to learn Russian in preparation for 18 months of training in Star City, near Moscow, before embarking on the eight-day mission.
Finally, Dr Sharman explained, the successful applicant had to be reasonably physically fit, or more specifically, healthy – ‘You can train a certain fitness if you’ve got an internal health’, she said.
Of the 13,000 initial applicants, Helen Sharman made it to the final two. She and RAF Major Timothy Mace would not find out until three months before departure who was first choice and who was backup.
Meanwhile, the two prospective astronauts underwent rigorous training and tests, flight simulations, and experienced the illusion of weightlessness (achieved through parabolic flight in an aeroplane) – ‘This is the part of the training that every astronaut agrees, by far, is the best bit’, Dr Sharman said.
Experiments in space
Of course, it was Helen Sharman who was selected, and on 18 May 1991 she boarded the Soyuz TM-12 mission to the Mir Space Station with Soviet cosmonauts Anatoly Artsebarsky and Sergei Krikalyov. Experiencing 3G of acceleration on launch, within 530 seconds – less than nine minutes – she was 400 km above the Earth’s surface. The Soyuz capsule orbited the planet for two days before manually docking with Mir.
With no time to waste, she began work on the experiments she was there to do for the next eight days…
Nature is providing the inspiration for a range of novel self-repairing materials – by mimicking bone healing to fix ceramics, for instance, or using bacteria to heal a ‘wound’ in an undersea power cable.
Self-healing polymers are already well known. A familiar example is self-healing composite aircraft wings: if a crack appears, microcapsules in the composite matrix rupture, releasing ‘sealant’ into the crack to repair it. Recently, however, researchers have expanded the range of ‘repairable’ substances to include other promising materials – including rubber, ceramics and even electronic circuits.
Paul Race, senior lecturer in biochemistry at Bristol University, UK, heads a multi-disciplinary project to develop new types of self-healing materials. The three-year project, called Manufacturing Immortality, is in partnership with six other UK universities and involves biologists, chemists and engineers. ‘Our aim is to create new materials that can regenerate – or are very difficult to break – by combining biological and non-biological components – such as bacteria with ceramics, glass or electronics,’ says Race, whose own research interests include the stereochemistry of antibiotics, and the activities of enzymes.
The project’s approach is quite different to most polymer-based self-healing technologies, which typically rely on simple hydrogen bonds and reversible covalent bonds. ‘There are limits to the polymer chemistry approach,’ he says. ‘We’re trying to take inspiration from biology, which uses much more elaborate and powerful approaches to achieve more dramatic repair.’
Self-healing rubber links permanent covalent bonds (in red) with reversible hydrogen bonds (green). Image: Peter and Ryan Allen/ Harvard press
As an example, Race refers to what happens when we break a bone or receive a bad cut, which triggers a cascade of events in which the body detects the damage and responds appropriately. The team’s work is aimed at three broad application areas: safety-critical systems; energy generation; and consumer electronics.
Biocompatibility is becoming increasingly important in the development of new medical treatments. Implants are traditionally made of materials foreign to the human body – from titanium to silicone – that can cause issues with systemic toxicity and may lead the body to reject the implant.
Like the human body, a significant proportion of the make-up of hydrogels is water – 90% compared to the body’s 60% – making them a viable modern alternative to the current standard of implants.
At the moment, the focus is on the development of hydrogels for drug delivery systems, although their potential stretches further.
Inspired by nature
One such example of hydrogel innovation was developed by researchers at the University of Michigan, US, and the University of Fribourg, Switzerland. Finding inspiration from the electric eel, the team created a flexible electrical device that could be used as a power source for implanted health monitors.
The electric eel generates power using transmembrane transport, whereby ion channels control the passage of cations and anions through the membrane in the eel’s electrocytes.
At rest, these ions cancel each other out. However, when triggered, the cation channels become more permeable, shifting the overall potential across the cell. In these instances, the eel can produce up to 600V of electricity.
‘The electric organs in eels are incredibly sophisticated; they’re far better at generating power than we are,’ said Michael Mayer, co-author and Biophysics Professor at the University of Fribourg. ‘But the important thing for us was to replicate the basics of what is happening.’
An electric eel. Image: Scott/Flickr
First, the group dissolved sodium and chloride ions in the hydrogel and built up layers by printing thousands of droplets of the salty gel, alternated with hydrogel droplets of pure water. Each type of droplet could conduct only cations or only anions.
Pressing the cells together created a concentration gradient, producing a system similar to that of the electric eel.
By stacking 2,449 of these cells, Mayer says, the hydrogel produced 110 V – but the hydrogel’s internal resistance means the power output of the cells is only around 50 µW. The team are now working to improve its efficiency.
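A back-of-envelope sketch shows how thousands of tiny membrane potentials can add up to a large open-circuit voltage, while a big internal resistance keeps the deliverable power tiny. The per-cell voltage and the internal resistance below are assumptions for illustration, not figures from the paper:

```python
# Illustrative series-stack arithmetic for the eel-inspired hydrogel.
# Per-cell voltage and internal resistance are assumed values.

def stack_voltage(cells, per_cell_volts):
    """Open-circuit voltage of identical cells in series."""
    return cells * per_cell_volts

def matched_load_power(volts, r_internal):
    """Maximum power delivered to a matched load: P = V^2 / (4R)."""
    return volts ** 2 / (4.0 * r_internal)

v = stack_voltage(2449, 0.045)    # ~45 mV per cell (assumed)
p = matched_load_power(v, 60e6)   # ~60 Mohm internal resistance (assumed)
print(f"~{v:.0f} V open circuit, ~{p * 1e6:.0f} uW deliverable")
```

With these assumptions the stack delivers a voltage in the hundreds-of-cells-times-millivolts range but only tens of microwatts of power, illustrating why reducing internal resistance is the team's next challenge.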
‘Maybe the most obvious thing to think of as a next step would be to try in some creative way to tap into the existing ionic gradients within the body. Much better, of course, would be a design where one could tap into metabolic energy to keep an artificial organ always charged,’ said Mayer.
‘That would be the ultimate achievement, but that’s very difficult to reach and we have not approached that part of the problem.’
Victor Christou, CEO of Cambridge Innovation Capital and Head Judge gives an inspirational talk on entrepreneurship and innovation. All photos: Andrew Lunn/SCI
Robin Harrison, Global Innovation Director for Synthomer, competition judge and sponsor.
Lucinda Bruce-Gardyne, Founder of Genius Foods and competition judge, tells her inspiring story – the UK’s leading gluten-free bread, now distributed in Europe and Australia, started in her kitchen.
University of St Andrews’ EasyMed pitched an implant for Alzheimer’s treatment.
The judging panel was completed by Inna Baigozina-Goreli, a Managing Director at Accenture with over 25 years’ experience in management consultancy.
The University of Manchester’s Team LEAD were the competition’s first runners-up, with their low-cost desalination module.
The first of two finalists from UCL, ZRZ Tech presented an innovative treatment for obesity.
Team Protector brought an excellent proposal down from the University of Aberdeen – turning China’s food waste into a sustainable fertiliser.
Team Glucoguard were the second team from UCL to make it to the final, and their GM bacterium for diabetes treatment was particularly well received…
Team SalSoc Synthesis delivered an excellent solution to university lab waste, developed at Salford University.
Sharon Todd, Executive Director of SCI, rounds off the pitching session and invites the judges to deliberate.
Every team delivered a truly winning pitch…but only one could walk away with the £1,000 prize. Many congratulations to Camillo, Libby, and Jack, of UCL’s Team Glucoguard!
A quick reminder about @SCIupdate event that I am organising/speaking at "What a chemist needs to know about patents" https://t.co/KfO1XEqccB. It's pretty cheap, esp. if you book before 12 March, but if you have difficulty affording it please let me know and @EIP will try to help
With a raft of developments in engineered timber, architects and designers are increasingly turning to wood as their material of choice. In advance of SCI’s Timber in Construction Materials event, here are five facts about this spectacularly versatile, sustainable material.
1. There’s a super-dense wood that’s as strong as steel, but six times lighter
Liangbing Hu and Teng Li pose with their chemically treated bulletproof wood. University of Maryland
A team at the University of Maryland (UMD), US, have made wood 12 times stronger and 10 times tougher than in its natural form.
Their process consists of boiling the wood in a bath of sodium hydroxide and sodium sulphite, heating it, then subjecting it to compression.
Leading the research, Liangbing Hu, assistant professor in UMD’s department of materials science, said, ‘This could be a competitor to steel or even titanium alloys, it is so strong and durable. It’s also comparable to carbon fibre, but much less expensive.’
The team shot bullet-like projectiles at their super wood to test it – predictably, they blew straight through natural wood, but were stopped by the new material.
The discovery could make even soft, fast-growing woods, such as balsa, more useful in buildings – offering a much quicker carbon payback than slower-growing denser hardwoods such as teak.
The researchers claim the process will work on any kind of timber. Many methods for densifying wood have been tried over the years, such as exposing the wood to steam or ammonia and then rolling it, like a steel bar, but the results have been less than ideal – particularly due to wood’s tendency to expand and contract in response to atmospheric water.
2. It doesn’t have to burn
You’d be forgiven for associating wood with fire – but engineered timber products such as cross-laminated timber (CLT) have repeatedly demonstrated excellent fireproofing qualities in testing.
The moisture content of timber means that CLT panels char slowly and predictably. This creates an insulating layer that protects the core of the panel, allowing it to maintain its structural integrity for up to three hours.
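The 'sacrificial char layer' idea can be sketched as a simple calculation. Design codes treat char as advancing at a near-constant notional rate; the ~0.65 mm/min figure below is a commonly quoted softwood value, used here purely as an assumption:

```python
# Rough sketch of charring behaviour: the char front advances at a
# near-constant rate, so the remaining structural section can be
# estimated directly. The char rate is an assumed, typical value.

CHAR_RATE_MM_PER_MIN = 0.65  # assumed notional softwood char rate

def residual_thickness(panel_mm, minutes, sides=1):
    """Structural thickness left after fire exposure on `sides` faces."""
    return panel_mm - sides * CHAR_RATE_MM_PER_MIN * minutes

# A 200 mm panel exposed to fire on one face for 90 minutes:
print(f"residual: {residual_thickness(200.0, 90):.1f} mm")
```

Even after 90 minutes of one-sided exposure, well over two-thirds of a 200 mm panel remains structurally intact in this sketch, which is the essence of how CLT holds its integrity in a fire.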
3. Timber towers are coming
Proposed design for Sumitomo Forestry’s 2041 tower.
Picture a skyscraper. In your mind’s eye, it’s all steel and glass, right?
That’s set to change. Just this month, Japanese timber company Sumitomo Forestry revealed plans for the world’s tallest wooden building in Tokyo. At 350 metres, the proposed skyscraper would be taller than any building in the country – although taller buildings could crop up before it is built; Sumitomo plans to complete the tower to mark the company’s 350th anniversary in 2041.
The company plans for 90% of the hybrid structure to be wood: a whopping 185,000 cubic metres of timber would go into the ‘braced tube structure’, which features minimal steel – the columns and beams will be hybrid steel and timber, with some additional steel braces in the construction. The tower would contain a hotel, residential units, offices, and shops – surrounded by large, plant-covered balconies.
Today’s tallest timber structure is the Brock Commons Tallwood House, a student residence building at the University of British Columbia (UBC), Canada.
Standing at 53 metres, the 18-storey block was prefabricated off-site, and then constructed in just 70 days. The elevator and stair shafts were made from concrete, but the vertical columns and floor plates were constructed using glue-laminated timber – multiple layers of dimensioned lumber bonded by durable, moisture-resistant structural adhesives.
4. London is home to the world’s largest timber building
Not the tallest – the largest. Dalston Works, a 10-storey, 121-unit housing development in East London, was completed in 2017. You wouldn’t know from its outer appearance – it’s clad in brick – but from the first floor upwards, the walls, floors, ceilings, stairs, and lift core are all made from CLT.
It was designed by Waugh Thistleton – a firm that has pioneered the use of CLT since 2003. The timber frame has 50% less embodied CO2 (calculated from the energy required in its production) than a traditional concrete frame, and locks in 2,600 tonnes of CO2.
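A figure like 2,600 tonnes can be sanity-checked with simple arithmetic: dry timber is roughly half carbon by mass, and each tonne of carbon corresponds to 44/12 tonnes of CO2. The sketch below uses an assumed spruce-CLT density and an illustrative timber volume – these are not figures from the Dalston Works design.

```python
# Back-of-envelope estimate of the CO2 locked into a timber frame.
# Assumed figures (illustrative only): spruce CLT density ~470 kg/m3,
# dry timber ~50% carbon by mass, and the 44/12 molar mass ratio
# converting a mass of carbon into the equivalent mass of CO2.

def co2_locked_tonnes(volume_m3, density_kg_m3=470, carbon_fraction=0.5):
    carbon_kg = volume_m3 * density_kg_m3 * carbon_fraction
    return carbon_kg * (44 / 12) / 1000  # tonnes of CO2

# An illustrative 3,000 m3 of CLT lands in the same ballpark
# as the 2,600-tonne figure quoted for the building.
print(round(co2_locked_tonnes(3000)))  # prints 2585
```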
5. Wood is 100% renewable (as long as it’s sustainably managed)
Unlike bricks and concrete, which rely on the extraction of a finite supply of raw materials, timber is truly renewable – that is, of course, if another tree is planted when one is felled. Timber also does not require the extreme levels of heat used in the production of steel.
Humans have been cultivating land to produce crops and rear animals for around 12,500 years. Since then, we have been continually improving and refining the processes we use, from the stone tools of the Neolithic Revolution to the machines of the modern day.
The next great leap in agricultural techniques could stem from the use of drones to improve the precision agriculture approach.
In a recent review, PwC estimated the market for drone-powered solutions in agriculture could be as much as US$32.4 billion. Recent breakthroughs in areas such as satellite imaging, remote sensing and meteorology, combined with the advances in drone technology, mean we could be on the cusp of the next great agricultural revolution.
Vineyards in Germany. Image: Taxiarchos228@Wikimedia Commons
In some cases, drones make use of available technology, but in a much more targeted way. In others, their flexibility means innovative approaches are possible. PwC identified key areas across the agricultural cycle where drones could make a substantive difference in farming.
Soil and field analysis
Drones could improve soil nutrient mapping. Image: Brian Boucheron
Early soil analysis informs seed planting patterns, irrigation techniques, and fertiliser use. Nutrient mapping has been a crucial component of precision agriculture since the introduction of GPS in the mid-1990s, and drones will take that further, with more detailed maps available.
Drone systems could vastly improve on the productivity of current farming methods. Image: Pixabay
Some startups have created drone-planting systems that they believe could achieve an uptake rate of 75%, by shooting pods containing both the seeds and necessary nutrients into the ground, as well as decreasing planting costs by 85%.
Aerial spraying by drones could be five times faster than current machinery. Crucially, drones’ ability to assess topography would mean even coverage. Continued assessment by the drones could reveal production inefficiencies in specific areas, leading to faster and more targeted crop management.
Wheat aphid cluster. Image: Texas A&M AgriLife
Crop failure can lead to huge losses if not identified and responded to rapidly. Drones can carry devices that produce multispectral images, using both visible and near-infrared light to assess changes in the health of crops.
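One widely used way to turn those visible and near-infrared bands into a crop-health signal is the Normalised Difference Vegetation Index (NDVI): healthy vegetation absorbs red light and strongly reflects near-infrared, so NDVI rises towards +1, while stressed crops or bare soil score lower. The article does not name a specific index, so the sketch below, with made-up reflectance values, is illustrative only.

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """NDVI = (NIR - Red) / (NIR + Red), computed per pixel.

    eps guards against division by zero on dark pixels.
    """
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + eps)

# Toy 2x2 field: left column healthy (high NIR reflectance),
# right column stressed (lower NIR, higher red).
nir_band = np.array([[0.60, 0.30], [0.62, 0.28]])
red_band = np.array([[0.08, 0.20], [0.07, 0.22]])
print(np.round(ndvi(nir_band, red_band), 2))
```

A drone survey repeated over weeks would track these per-pixel values over time, flagging patches whose NDVI falls before the stress is visible to the eye.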
The Lake District – the Centre for Innovation Excellence in Livestock is based in the Yorkshire countryside. Image: Wikimedia Commons
In 2015, the UK government announced £68m in three new Centres for Agricultural Innovation as part of its Agri-Tech Strategy to make the UK a world leader in agricultural technologies.
Ministers at the time believed an agri-tech revolution was needed to meet global food and energy challenges and the UK would be ideally placed to lead the way, with its research centres, established agricultural sector, and global influence. The current government is clearly of the same mind, with ‘transforming food production’ a key area of the Industrial Strategy Challenge Fund.
The UK is not alone. Both China and Israel’s state aerospace companies are developing technologies for use in this area, as well as Japan’s Yamaha, the USA’s Lockheed Martin, Canada’s Aeryon, and Sweden’s CybAero.
PwC’s advanced analytics: Drones. Video: PwCCanada
But the next agricultural revolution isn’t quite here yet. As with any new technology, there are ongoing concerns about the use of drones by private industry, but the main issue in the agricultural sector is the technology itself: whether both the drones and the equipment they would need to carry are sophisticated enough to deliver.
While other industries interested in using drones might be focusing on privacy and insurance issues, the agri-tech sector is pushing for further technological improvements, such as better quality sensors and cameras, as well as even more highly automated drones.
Precision agriculture has a way to go before it becomes the norm in farming. Image: Cesar Harada@Flickr
However, the funding commitments from states and private companies around the world, together with the speed of developments in recent years, suggest that drones could play a major role in the next stage of agricultural development. The tools of the future will likely be a far cry from the stone sickles of our ancestors.
Precision medicine is often described as a new or emerging approach that will revolutionise healthcare, but it might be more accurate to describe it as an advancement on existing practice: after all, health treatment is already, where possible, personalised – according to environment, genes, and lifestyle – to maximise each patient’s outcome.
It is, however, a significant advancement. Considering a patient’s family history or diet is not the same as tailoring a treatment approach exactly to the patient’s genetic makeup and disease type, and the successes can be remarkable: ivacaftor, for instance, developed by Vertex Pharmaceuticals, treats the underlying causes of cystic fibrosis in patients with G551D mutations in the CFTR gene (around 5% of cases). It is considerably more effective and convenient than conventional approaches that focus on symptoms.
A lung cancer cell during cell division. Image: National Institutes of Health
The successes of precision medicine have led to widespread enthusiasm and investment. When President Obama announced the USA’s precision medicine initiative in 2015, he claimed that it ‘gives us one of the greatest opportunities for new medical breakthroughs that we have ever seen’.
President Obama speaks at the launch of the Precision Medicine Initiative in 2015. Video: Cystic Fibrosis Foundation
The UK government is also investing. Six regional centres of excellence for precision medicine were established by Innovate UK in 2015 to develop innovative technologies for healthcare, in Belfast, Cardiff, Glasgow, Leeds, Manchester, and Oxford, in addition to the Cambridge-based Precision Medicine Catapult technology and innovation centre. It has also been highlighted as an area of focus for the Industrial Strategy, with an extra £210m of funding announced as part of the 2017 white paper. The UK’s research strength, combined with NHS evidence, is seen as a major opportunity in this area.
Pushing the boundaries
Precision medicine is perhaps most common in oncology, where it is considered a leading innovation in treatment. Drugs designed to focus on specific tumours and molecules are regularly used to treat cancer patients. Radiomics, the practice of assessing tumour phenotypes through the analysis of quantitative features from medical images, is considered a crucial step forward in the field, as it enables doctors to better guide therapies and predict responses.
A collaborative project between the Moffitt Cancer Center and Dana-Farber Cancer Institute is using radiomics to non-invasively assess the molecular and clinical characteristics of lung tumours. Dr Robert Gillies, Chair of Moffitt’s Department of Cancer Imaging and Metabolism, explains the approach, ‘The core belief of radiomics is that images aren’t pictures, they’re data. We have to treat them as data. Right now, we extract about 1,300 different quantitative features from any volume of interest’.
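As a toy illustration of treating images as data, the sketch below computes a handful of first-order intensity features from a synthetic scan and a hypothetical region of interest. Real radiomics pipelines, such as the roughly 1,300-feature extraction Gillies describes, also include shape and texture feature families; none of this code is from the Moffitt/Dana-Farber project.

```python
import numpy as np

def first_order_features(image, mask):
    """A few simple first-order radiomic features from a masked region."""
    roi = image[mask].astype(float)  # voxel intensities inside the ROI
    return {
        "mean": roi.mean(),
        "std": roi.std(),
        "min": roi.min(),
        "max": roi.max(),
        "energy": float((roi ** 2).sum()),  # sum of squared intensities
    }

rng = np.random.default_rng(0)
scan = rng.normal(100, 15, size=(8, 8))   # synthetic image slice
tumour_mask = np.zeros((8, 8), dtype=bool)
tumour_mask[2:6, 2:6] = True              # hypothetical tumour ROI
print(first_order_features(scan, tumour_mask))
```

Feature vectors like this, computed for every patient, are what get correlated against molecular and clinical outcomes.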
Another complex disease that could be revolutionised by precision medicine is diabetes, one of the fastest growing global health challenges. Researchers from the University of Sydney’s Charles Perkins Centre have identified three specific molecules that accurately indicate insulin resistance, or pre-diabetes, particularly when present together. Professor James, the senior author, believes that, ‘Once we can identify the molecules and other factors that contribute to pre-diabetes, we can customise treatments to suit patients’ specific make up and needs’. The study is available in the Journal of Biological Chemistry.
A note of caution
There are, however, concerns about the approach. Precision medicine can be extremely expensive – ivacaftor, for example, costs US$300,000 per year, per patient. Moreover, it only works on the 5% of patients with G551D mutations in the CFTR gene.
Another major concern is the data on which precision medicine research relies. A study led by the Translational Genomics Research Institute, USA, suggests that the current approach in oncology is ‘more precise for those of European descent, and less precise for those whose ancestry is from Latin America, Africa and Asia’. Patients from underrepresented backgrounds risk being misdiagnosed and provided with inappropriate therapies, say the team. This study is available in BMC Medical Genomics.
Latin America is setting the pace in clean energy, led by Brazil and Mexico. Renewables account for more than half of electricity generation in Latin America and the Caribbean – compared with a world average of about 22% – according to the International Energy Agency.
Brazil is one of the world’s leading producers of hydropower, while Mexico is a leader in geothermal power. Smaller countries in the region are also taking a lead. In Costa Rica, about 99% of the country’s electricity comes from renewable sources, while in Uruguay the proportion is close to 95%.
The Itaipu hydroelectric dam, on the border of Brazil and Paraguay, generated 89.5TWh of energy in 2015. Image: Deni Williams
At the same time, countries such as Chile, Brazil, Mexico and Argentina have adjusted their regulations to encourage alternative energy without having to offer subsidies. Some have held auctions for generation contracts purely for renewables.
Latin America’s renewable energy production is dominated by an abundance of hydropower, but there is strong growth potential for other sources of renewable energy. Wind and solar power are expected to account for about 37% of the region’s electricity generation by 2040, compared with current levels of about 4%, according to a report from Bloomberg New Energy Finance (BNEF).