Search Index
268 items found
- Exploring the solar system: Mercury | Scientia News
Exploring the solar system: Mercury

The closest planet to the Sun

Mercury, the closest planet to the Sun, holds a significant place in our understanding of the solar system and serves as our first stepping stone in the exploration of the cosmos. Its intriguing history dates back to ancient times, when it was studied and recorded by the Babylonians in their celestial charts. Around 350 BC, the ancient Greeks recognised that the celestial body known as the evening and morning star was, in fact, a single entity. Impressed by its swift movement, they named it Hermes, after the swift messenger of their mythology. As time passed, the Roman Empire adopted the Greek discovery and bestowed upon it the name of their equivalent messenger god, Mercury, the name by which the planet is known today. This ancient recognition of Mercury's uniqueness paved the way for our continued exploration and study of this fascinating planet.

Mercury's evolution

As Mercury formed from the primordial cloud of gas and dust known as the solar nebula, it went through a process called accretion. Small particles collided and gradually merged, forming larger bodies called planetesimals. Over time, these planetesimals grew through further collisions and gravitational attraction, eventually forming the protoplanet that would become Mercury. However, the proximity to the Sun presented unique challenges for Mercury's formation. The Sun emitted intense heat and powerful solar winds that swept away much of the planet's initial atmosphere and surface materials. This process, known as solar stripping or solar ablation, left behind a relatively thin and tenuous atmosphere compared to other planets in the solar system.

The intense heat also played a crucial role in shaping Mercury's surface. The planet's surface rocks melted and differentiated, with denser materials sinking towards the core while lighter materials rose to the surface. This process created a large iron-rich core, accounting for about 70% of the planet's radius. Mercury's lack of significant geological activity, such as plate tectonics, has allowed its surface to retain ancient features and provide insights into the early history of our solar system.

The planet's surface is dominated by impact craters, much like the Moon's. These craters are the result of countless collisions with asteroids and comets over billions of years. The largest and most prominent impact feature on Mercury is the Caloris Basin, a vast impact crater approximately 1,525 kilometres in diameter. The impact of such large celestial bodies created shockwaves and volcanic activity, leaving behind a scarred and rugged terrain. Scientists estimate that the period known as the Late Heavy Bombardment, which occurred around 3.8 to 4.1 billion years ago, was particularly tumultuous for Mercury. During this time, the inner planets of our solar system experienced a high frequency of cosmic collisions. These impacts not only shaped Mercury's surface but also influenced the evolution of other rocky planets like Earth and Mars. Studying Mercury's geology and surface features provides valuable insights into the early stages of planetary formation and the impact history of our solar system.

Exploration history

Our understanding of Mercury has greatly benefited from a series of pioneering missions that ventured close to the planet and provided valuable insights into its characteristics.
Let's delve into the details of these key exploratory endeavours:

Mariner 10 (1974-1975): Launched by NASA, Mariner 10 was the first spacecraft to conduct a close-up exploration of Mercury. It made three flybys of the planet in 1974 and 1975, capturing images of approximately 45% of Mercury's surface and revealing its heavily cratered terrain. The spacecraft's observations provided crucial information about the planet's rotation period, which was found to be approximately 59 Earth days. Mariner 10 also discovered that Mercury possessed a magnetic field, albeit weaker than Earth's.

MESSENGER (2004-2015): The MESSENGER mission, short for Mercury Surface, Space Environment, Geochemistry, and Ranging, was launched by NASA in 2004. It became the first spacecraft to enter orbit around Mercury in 2011, marking a significant milestone in the exploration of the planet. Over the course of more than four years, MESSENGER conducted an extensive study of Mercury's surface and environment. It captured detailed images of previously unseen regions, revealing the planet's diverse geological features, including vast volcanic plains and cliffs. MESSENGER's data also indicated the presence of water ice in permanently shadowed craters near Mercury's poles, surprising scientists. Furthermore, the mission confirmed and mapped Mercury's global magnetic field, challenging previous assumptions about the planet's magnetism. MESSENGER's observations greatly expanded our knowledge of Mercury's geology, composition, and magnetic properties.

BepiColombo (2018-present): The BepiColombo mission, a joint endeavour between the European Space Agency (ESA) and the Japan Aerospace Exploration Agency (JAXA), aims to further enhance our understanding of Mercury. The mission consists of two separate orbiters: the Mercury Planetary Orbiter (MPO), developed by ESA, and the Mercury Magnetospheric Orbiter (MMO), developed by JAXA. Launched in 2018, BepiColombo is currently on its journey to Mercury, with an expected arrival in 2025. Once there, the mission will study various aspects of the planet, including its magnetic field, interior structure, and surface composition. The comprehensive data collected by BepiColombo's orbiters will contribute significantly to our knowledge of Mercury and help answer remaining questions about its formation and evolution.

These missions have played pivotal roles in advancing our understanding of Mercury. They have provided unprecedented insights into the planet's surface features, composition, magnetic field, and geological history. As exploration efforts continue, we can anticipate further revelations and a deeper understanding of this intriguing world.

Future exploration

While significant advancements have been made in understanding Mercury, there is still much more to learn. Scientists hope to explore areas of the planet that have not yet been observed up close, such as the north pole and regions where water ice may be present. They also aim to study Mercury's thin atmosphere, which consists of atoms blasted off the surface by the solar wind. Moreover, the advancement of technology may lead to the development of innovative missions to Mercury. Concepts such as landing missions and even manned exploration have been proposed, although the challenges associated with the planet's extreme environment and proximity to the Sun make such endeavours highly demanding.
Nevertheless, the quest to unravel Mercury's mysteries continues, driven by the desire to deepen our knowledge of planetary formation, evolution, and the unique conditions that shaped this enigmatic world. Exploring the uncharted areas of Mercury, particularly the north pole, holds great scientific potential. The presence of water ice in permanently shadowed regions has been suggested by previous observations, and investigating these areas up close could provide valuable insights into the planet's volatile history and the potential for water resources. Additionally, studying Mercury's thin atmosphere is of significant interest. This atmosphere is composed mostly of atoms blasted off the surface by the intense solar wind, and understanding its composition and dynamics could shed light on the processes that shape Mercury's exosphere.

In conclusion, while significant progress has been made in unravelling the mysteries of Mercury, there is still much to explore and discover. Scientists aspire to investigate untouched regions, study the planet's thin atmosphere, and employ innovative mission concepts. The future may hold ambitious missions, including landers and potentially even manned exploration. As our knowledge and capabilities expand, Mercury continues to beckon us with its fascinating secrets, urging us to push the boundaries of exploration and expand our understanding of the wonders of the solar system. And with that we finish our journey into the history and exploration of Mercury, and we will move on to Venus in the next article.

Written by Zari Syed

Related articles: Fuel for the colonisation of Mars / Nuclear fusion
- Silicone hydrogel contact lenses | Scientia News
Silicone hydrogel contact lenses

An engineering case study

Introduction

Contact lenses have a rich and extensive history dating back over 500 years, to 1508, when Leonardo da Vinci first conceived the idea. It was not until the late 19th century that the concept of contact lenses as we know them today was realised: in 1887, F. E. Muller was credited with making the first eye covering that could improve vision without causing any irritation. This eventually led to the first generation of hydrogel-based lenses, as the development of the polymer hydroxyethyl methacrylate (HEMA) allowed Rishi Agarwal to conceive the idea of disposable soft contact lenses. Silicone hydrogel contact lenses dominate the contemporary market. Their superior properties have extended wear options and transformed the landscape of vision correction. These small but complex items continue to evolve, benefiting wearers worldwide. This evolution is such that the most recent generation of silicone hydrogel lenses has recently been released and aims to phase out the existing products.

Benefits of silicone hydrogel lenses

There are many benefits to this material's use in this application. For example, higher oxygen permeability improves user comfort and experience through the increased oxygen transmissibility the material offers. These properties are furthered by the lens's moisture retention, which allows for longer wear times without compromising comfort or eye health. Silicone hydrogel lenses therefore aimed to eradicate the drawbacks of traditional hydrogel lenses, including low oxygen permeability, lower lens flexibility, and dehydration causing discomfort and long-term issues. This groundbreaking invention has revolutionised convenience and hygiene for users.

The structure of silicone hydrogel lenses

Lenses are fabricated from a blend of two materials: silicone and hydrogel. The silicone component provides high oxygen permeability, while the hydrogel component contributes to comfort and flexibility. Silicone is a synthetic polymer and is inherently oxygen-permeable; it allows more oxygen to reach the cornea, promoting eye health and avoiding hypoxia-related symptoms. Its polymer chains form a network, creating pathways for oxygen diffusion. Hydrogel materials, by contrast, are hydrophilic polymers that retain water, keeping the lens moist and comfortable while contributing to the lens's flexibility and wettability. The two materials are combined using cross-linking techniques, which stabilise the matrix to make the most of both sets of properties and prevent dissolution (see Figure 1). There are two forms of cross-linking that enable the production of silicone hydrogel lenses: chemical and physical. Chemical cross-linking involves covalent bonds between polymer chains, enhancing the lens's mechanical properties and stability. Physical cross-links include ionic interactions, hydrogen bonding, and crystallisation. Both techniques contribute to the lens's structure and properties and can be enhanced with polymer modifications. In fact, silicone hydrogel macromolecules have been modified to optimise properties such as miscibility with hydrophilic components, clinical performance, and wettability.

The new generation of silicone hydrogel contact lenses

Properties

Studies show that wearers of silicone hydrogel lenses report higher comfort levels throughout the day and at the end of the day compared to conventional hydrogel lenses. This is attributed to the fact that they allow around five times more oxygen to reach the cornea. This is significant because reduced oxygen supply can lead to dryness, redness, blurred vision, discomfort, and even corneal swelling.
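As a rough illustration of what figures like this mean in practice, the short sketch below compares oxygen transmissibility (Dk/t: permeability Dk divided by centre thickness t) for a generic conventional hydrogel lens and a generic silicone hydrogel lens. The Dk values and thicknesses are ballpark assumptions for illustration only, not figures from the article or from any specific product.

```python
# Illustrative oxygen transmissibility (Dk/t) comparison.
# The Dk values and centre thicknesses below are rough, generic assumptions.
lenses = {
    "conventional hydrogel": {"dk": 25.0,  "centre_thickness_mm": 0.09},
    "silicone hydrogel":     {"dk": 125.0, "centre_thickness_mm": 0.09},
}

results = {}
for name, p in lenses.items():
    # Dk/t: higher values mean more oxygen reaches the cornea.
    results[name] = p["dk"] / p["centre_thickness_mm"]
    print(f"{name:>22}: Dk/t = {results[name]:.0f} (arbitrary but consistent units)")

ratio = results["silicone hydrogel"] / results["conventional hydrogel"]
print(f"silicone hydrogel transmits about {ratio:.0f}x more oxygen in this example")
```

With these illustrative numbers the silicone hydrogel lens transmits about five times more oxygen, in line with the figure quoted above; real products vary with water content, material and lens design.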
What's more, the most recent generation of lenses has further improved material properties, the first of which is enhanced durability and wear resistance. This is attributed to their complex and unique material composition, which maintains their shape and makes them suitable for various lens designs. Additionally, they exhibit a balance between hydrophilic and hydrophobic properties, which has traditionally caused issues with surface wettability; this generation of products has overcome the problem through surface modifications that improve wettability and therefore comfort. Not only this, but silicone hydrogel materials attract relatively fewer protein deposits, and reduced protein build-up leads to better comfort and less frequent lens replacement.

Manufacturing

Most current silicone hydrogel lenses are produced using one of two key processes: cast moulding or lathe cutting. In lathe cutting, the material is polymerised into solid rods, which are then cut into buttons for further processing in a computerised lathe, creating the lenses. Surface modifications are also employed to enhance the finished lens; for example, plasma surface treatments enhance biocompatibility and improve surface wettability compared to earlier silicone elastomer lenses.

Future innovations

There are various future expansions related to this material and this application. Currently, researchers are exploring ways to create customised and personalised lenses tailored to an individual's unique eye shape, prescription, and lifestyle. One of the ways they aim to do this is by using 3D printing and digital scanning to allow for precise fitting. Although this is feasible, there are challenges relating to scalability and cost-effectiveness while ensuring quality. Moreover, another possible expansion is smart contact lenses, which aim to go beyond just improving the user's vision. For example, smart lenses are currently being developed for glucose and intraocular pressure monitoring to benefit patients with diseases including diabetes and glaucoma respectively. The challenges associated with this idea are data transfer, oxygen permeability and, therefore, comfort (see Figure 2).

Conclusion

In conclusion, silicone hydrogel lenses represent a remarkable fusion of material science and engineering. Their positive impact on eye health, comfort, and vision correction continues to evolve. As research progresses, we can look forward to even more innovative solutions benefiting visually impaired individuals worldwide.

Written by Roshan Gill

Related articles: Semi-conductor manufacturing / Room-temperature superconductor

REFERENCES

Optical Society of India. Journal of Optics, Volume 53, Issue 1. Springer, February 2024.
Lamb J, Bowden T. The history of contact lenses. Contact Lenses. 2019 Jan 1:2-17.
Ţălu Ş, Ţălu M, Giovanzana S, Shah RD. A brief history of contact lenses. Human and Veterinary Medicine. 2011 Jun 1;3(1):33-7.
Brennan NA. Beyond flux: total corneal oxygen consumption as an index of corneal oxygenation during contact lens wear. Optometry and Vision Science. 2005 Jun 1;82(6):467-72.
Dumbleton K, Woods C, Jones L, Fonn D, Sarwer DB. Patient and practitioner compliance with silicone hydrogel and daily disposable lens replacement in the United States. Eye & Contact Lens. 2009 Jul 1;35(4):164-71.
Nichols JJ, Sinnott LT. Tear film, contact lens, and patient-related factors associated with contact lens-related dry eye. Investigative Ophthalmology & Visual Science. 2006 Apr 1;47(4):1319-28.
Jacinto S. Rubido. Ocular response to silicone-hydrogel contact lenses. 2004.
Musgrave CS, Fang F. Contact lens materials: a materials science perspective. Materials. 2019 Jan 14;12(2):261.
Shaker LM, Al-Amiery A, Takriff MS, Wan Isahak WN, Mahdi AS, Al-Azzawi WK. The future of vision: a review of electronic contact lenses technology. ACS Photonics. 2023 Jun 12;10(6):1671-86.
Kim J, Cha E, Park JU. Recent advances in smart contact lenses. Advanced Materials Technologies. 2020 Jan;5(1):1900728.
- The Genetics of Ageing and Longevity | Scientia News
The Genetics of Ageing and Longevity

A well-studied longevity gene is SIRT1

Ageing is a natural process inherent to all living organisms, yet its mechanisms remain somewhat enigmatic. While lifestyle factors undoubtedly influence longevity, recent advances in genetic research have revealed the influence of our genomes on ageing. By understanding these influences, we can unlock further knowledge of longevity, which can aid us in developing interventions to promote healthy ageing. This article delves into the genetics of ageing and longevity and how we can use this understanding to our benefit.

Longevity genes

A number of longevity genes, such as APOE, FOXO3, and CETP, have been identified. These genes influence various biological processes, including cellular repair, metabolism, and stress-response mechanisms. A well-studied longevity gene is SIRT1. Located on chromosome 10, SIRT1 encodes sirtuin 1, a histone deacetylase that also acts as a transcriptional co-regulator and cofactor. Its roles include protecting cells against oxidative stress, regulating glucose and lipid metabolism, and promoting DNA repair and stability via deacetylation. Sirtuins are evolutionarily conserved mediators of longevity in many organisms. One study looked at mice in which SIRT1 had been knocked out; these mice had significantly shorter lifespans than wild-type mice [1]. The protective effects of SIRT1 are thought to be due to deacetylation of p53, a protein that promotes cell death [2]. SIRT1 also stimulates the cytoprotective and stress-resistance gene activator FoxO1A (see Figure 1), which upregulates catalase activity to prevent oxidative stress [3].

Genome-wide association studies (GWAS) have identified several genetic variants associated with ageing and age-related diseases. Such variants influence diverse aspects of ageing, such as cellular senescence, inflammation, and mitochondrial function. For example, certain polymorphisms in APOE are associated with an increased risk of age-related conditions like Alzheimer's and Parkinson's disease [4]. These genes have a cumulative effect on the longevity of an organism.

Epigenetics of ageing

Epigenetic modifications, such as histone modifications and chromatin remodelling, regulate gene-expression patterns without altering the DNA sequence. Studies have shown that epigenetic alterations accumulate with age and contribute to age-related changes in gene expression and cellular function. For example, DNA methylation decreases in human fibroblasts during ageing. Furthermore, ageing correlates with decreased nucleosome occupancy in human fibroblasts, thereby increasing the expression of genes unoccupied by nucleosomes. One specific marker of ageing in metazoans is H3K4me3, the trimethylation of lysine 4 on histone 3; in fact, H3K4me3 demethylation extends lifespan. Similarly, H3K27me3 is also a marker of biological age. By using these markers as an epigenetic clock, we can predict biological age using molecular genetic techniques. As a rule of thumb, genome-wide hypomethylation and CpG island hypermethylation correlate with ageing, although this effect is tissue-specific [5].
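To make the epigenetic-clock idea concrete, the sketch below fits a penalised linear model that predicts age from CpG methylation values, which is broadly how published clocks are built. The methylation data here are randomly simulated placeholders, not real measurements, and the number of samples and CpG sites are arbitrary assumptions.

```python
# Minimal epigenetic-clock sketch: predict age from CpG methylation levels
# using elastic-net regression. All data below are simulated placeholders.
import numpy as np
from sklearn.linear_model import ElasticNetCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_cpgs = 300, 2000

ages = rng.uniform(20, 90, n_samples)
# Simulate methylation beta values in [0, 1]; a subset of CpGs drifts with age.
beta = rng.uniform(0.1, 0.9, (n_samples, n_cpgs))
age_linked = rng.choice(n_cpgs, 50, replace=False)
beta[:, age_linked] += 0.004 * (ages[:, None] - 55)   # slow age-related drift
beta = np.clip(beta, 0, 1)

X_train, X_test, y_train, y_test = train_test_split(beta, ages, random_state=0)

clock = ElasticNetCV(l1_ratio=0.5, cv=5, random_state=0)
clock.fit(X_train, y_train)

predicted = clock.predict(X_test)
print(f"mean absolute error: {np.abs(predicted - y_test).mean():.1f} years")
print(f"CpGs retained by the model: {np.sum(clock.coef_ != 0)}")
```

In a real clock the model would be trained on measured methylation arrays, and the CpGs the regression retains play the role of the age markers described above.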
Telomeres are regions of repetitive DNA at the terminal ends of linear chromosomes. Telomeres become shorter every time a cell divides (see Figure 2), and eventually this can hinder their function of protecting the ends of chromosomes. As a result, cells have complex mechanisms in place to prevent telomere degradation. One of these is the enzyme telomerase, which maintains telomere length by adding G-rich DNA sequences. Another mechanism is the shelterin complex, which binds to 'TTAGGG' telomeric repeats to prevent degradation. Two major components of the shelterin complex are TRF1 and TRF2, which bind telomeric DNA. They are regulated by the chromatin-remodelling enzyme BRM-SWI/SNF, which has been shown to be crucial in promoting genomic stability, preventing cell apoptosis, and maintaining telomeric integrity. BRM-SWI/SNF regulates TRF1/2, and thereby the shelterin complex, by remodelling the TRF1/2 promoter region to convert it to euchromatin and increase transcription. Inactivating mutations in BRM-SWI/SNF have been shown to contribute to cancer and cellular ageing through telomere degradation [6]. Together, the mechanisms cells have in place to protect telomeres provide protection against cancer as well as cellular ageing.

Future of anti-ageing drugs

Anti-ageing drugs are big business in the biotechnology and cosmetics sector. For example, senolytics are compounds that decrease the number of senescent cells in an individual. Senescent cells are those that have permanently exited the cell cycle and now secrete pro-inflammatory molecules (see Figure 3); they are a major cause of cellular and organismal ageing. Senolytic drugs aim to provide anti-ageing benefits by removing senescent cells and therefore decreasing inflammation. Researchers are confident that removing senescent cells would have an anti-ageing effect, although the senolytic drugs currently on the market are understudied, so their side effects are unknown. Speculative drugs could include those that enhance telomerase or SIRT1 activity.

Evidently, ageing is not determined purely by lifestyle and environmental factors but also by genetics. While longevity genes are hereditary, epigenetic modifications may be influenced by external factors. Understanding the role of genetics in ageing therefore means understanding the complex interplay between these external factors and an individual's genome. Perhaps we will see a new wave of anti-ageing treatments in the coming years, built on the genetics of ageing.

Written by Malintha Hewa Batage

Related articles: An introduction to epigenetics / Schizophrenia, inflammation and ageing

REFERENCES

Kilic, U et al. (2015) 'A Remarkable Age-Related Increase in SIRT1 Protein Expression against Oxidative Stress in Elderly: SIRT1 Gene Variants and Longevity in Human', PLoS One, 10(3).
Alcendor, R et al. (2004) 'Silent information regulator 2alpha, a longevity factor and class III histone deacetylase, is an essential endogenous apoptosis inhibitor in cardiac myocytes', Circulation Research, 95(10):971-80.
Alcendor, R et al. (2007) 'Sirt1 regulates aging and resistance to oxidative stress in the heart', Circulation Research, 100(10):1512-21.
Yin, Y & Wang, Z (2018) 'ApoE and Neurodegenerative Diseases in Aging', Advances in Experimental Medicine and Biology, 1086:77-92.
Wang, K et al. (2022) 'Epigenetic regulation of aging: implications for interventions of aging and diseases', Signal Transduction and Targeted Therapy, 7(1):374.

Images made using BioRender.
- The future of semiconductor manufacturing | Scientia News
The future of semiconductor manufacturing

Through photonic integration

Researchers at the University of Sydney recently developed a compact photonic semiconductor chip using heterogeneous material integration, combining an active electro-optic (E-O) modulator and photodetectors on a single chip. The chip functions as a photonic integrated circuit (PIC) offering 15 gigahertz of tunable frequency range with a spectral resolution of only 37 MHz, and it expands the usable radio-frequency (RF) bandwidth to precisely control the information flowing within the chip with the help of advanced photonic filter controls. The applications of this technology extend to various fields:

• Advanced radar: the chip's expanded radio-frequency bandwidth could significantly enhance the precision and capabilities of radar systems.
• Satellite systems: improved radio-frequency performance would contribute to more efficient communication and data transmission in satellite systems.
• Wireless networks: the chip has the potential to advance the speed and efficiency of wireless communication networks.
• 6G and 7G telecommunications: this technology may play a crucial role in the development of future generations of telecommunications networks.

Microwave photonics (MWP) is a field that combines microwave and optical technologies to provide enhanced functionalities and capabilities. It involves the generation, processing, and distribution of microwave signals using photonic techniques. An MWP filter is a component used in microwave photonics systems to selectively filter or manipulate certain microwave frequencies using photonic methods (see Figure 1). These filters leverage the unique properties of light and its interaction with different materials to achieve filtering effects in the microwave domain. They can be crucial in applications where precise control and manipulation of microwave signals are required. MWP filters can take various forms, including fiber-based filters, photonic crystal filters and integrated optical filters. These filters are designed to perform functions such as wavelength filtering, frequency selection and signal conditioning in the microwave frequency range. They play a key role in improving the performance and efficiency of microwave photonics systems.

The MWP filter operates through a sophisticated integration of optical and microwave technologies, as depicted in the diagram. Beginning with a laser as the optical carrier, the photonic signal is directed to a modulator, where it interacts with an input radio-frequency (RF) signal. The modulator dynamically influences the optical carrier's intensity, phase or frequency based on the RF input. The modulated signal then undergoes processing to shape its spectral characteristics in a manner dictated by a dedicated processor; this shaping is pivotal for achieving the desired filtering effect. The processed optical signal is then fed into a photodiode for conversion back into an electrical signal, based on the variations induced by the modulator on the optical carrier. The final output, an electrical signal, is the filtered and manipulated RF signal, demonstrating the MWP filter's ability to leverage both the optical and microwave domains for precise, high-performance signal processing.
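To make the laser → modulator → processor → photodiode chain more concrete, here is a toy numerical sketch of an intensity-modulated link with a simple band-pass stage standing in for the spectral "processor". It is a heavily simplified stand-in for the integrated photonic filter described above, not a model of the actual chip; every frequency, the small-signal modulation model and the filter choice are illustrative assumptions.

```python
# Toy model of a microwave photonic (MWP) filter link:
# RF input -> electro-optic intensity modulator -> spectral "processor"
# -> photodiode -> filtered RF output. All values are illustrative.
import numpy as np
from scipy.signal import butter, sosfilt

fs = 50e9                      # sample rate, 50 GSa/s
t = np.arange(0, 1e-6, 1 / fs)

# RF input: a wanted 3 GHz tone plus an unwanted 7 GHz tone
rf_in = np.sin(2 * np.pi * 3e9 * t) + np.sin(2 * np.pi * 7e9 * t)

# Small-signal intensity modulation of the optical carrier power
carrier_power_mw = 1.0
modulation_depth = 0.1
optical_power = carrier_power_mw * (1 + modulation_depth * rf_in)

# Photodiode: photocurrent proportional to optical power (responsivity in A/W)
responsivity = 0.8
photocurrent = responsivity * optical_power

# In the real chip the spectral shaping happens in the optical domain before
# detection; here a 4th-order RF band-pass around 3 GHz stands in for it.
sos = butter(4, [2.5e9, 3.5e9], btype="bandpass", fs=fs, output="sos")
rf_out = sosfilt(sos, photocurrent - photocurrent.mean())

print("peak filtered output amplitude:", np.max(np.abs(rf_out)))
```

Running this suppresses the 7 GHz tone while passing the 3 GHz one, which is the essential function the on-chip filter performs, only with far finer resolution and full tunability.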
Extensive research has been conducted in the field of MWP chips, as evidenced by a thorough examination in Table 1. This table compares recent studies based on chip material type, filter type, on-chip component integration, and working bandwidth. Notably, previous studies demonstrated noteworthy advancements despite their dependence on external components. What distinguishes the new chip is its integration of all components into a single chip, a significant breakthrough that sets it apart from previous attempts in the field.

Here the term "on-chip E-O" refers to the integration of electro-optical components directly onto a semiconductor chip or substrate. This integration facilitates the interaction between electrical signals (electronic) and optical signals (light) within the same chip. The purpose is to enable the manipulation, modulation or processing of optical signals using electrical signals, typically in the form of voltage or current control. Key components of on-chip electro-optical capabilities include:

1. Modulators, which alter the characteristics of an optical signal in response to electrical input; this is crucial for encoding information onto optical signals.
2. Photonic detectors, which convert optical signals back into electrical signals, extracting information for electronic processing.
3. Waveguides, which guide and manipulate the propagation of light waves within the chip, routing optical signals to various components.
4. Switches, which route or redirect optical signals based on electrical control signals.

This integration enhances compactness, energy efficiency, and performance in applications such as communication systems and optical signal processing.

"FSR-free operation" relates to the free spectral range (FSR), a characteristic of optical filters and resonators: the FSR is the separation in frequency between two consecutive resonant frequencies or transmission peaks. The "FSR-free operation" column indicates whether the optical processing platform operates without relying on a specific or fixed free spectral range, meaning its operation is not bound to a particular FSR. This can be advantageous in scenarios where flexibility in the spectral range, or the ability to operate over a range of frequencies without being constrained by a specific FSR, is desired.

"On-chip MWP link improvement" refers to enhancements made directly on a semiconductor chip to optimize the performance of MWP links. These improvements aim to enhance the integration and efficiency of communication or signal-processing links that involve both microwave and optical signals. The term implies advancements in key aspects such as data transfer rates, signal fidelity and overall link performance. On-chip integration brings advantages such as compactness and reduced power consumption.

The manufacturing of the photonic integrated circuit (PIC) involves partnering with semiconductor foundries overseas to produce the foundational chip wafer. This new chip technology will play a crucial role in advancing independent manufacturing capabilities. Embracing this type of chip architecture enables a nation to nurture the growth of its autonomous chip manufacturing sector by mitigating reliance on international foundries. The extensive chip delays witnessed during the 2020 COVID pandemic underscored the global realization of the chip market's significance and its potential impact on electronic manufacturing.

Written by Arun Sreeraj

Related articles: Advancements in semi-conductor technology / The search for a room-temperature superconductor / Silicone hydrogel lenses
- The search for a room-temperature superconductor | Scientia News
The search for a room-temperature superconductor

A (possibly) new class of superconductors

In early August, the scientific community was buzzing with excitement over the claimed discovery of the first room-temperature superconductor. As some rushed to prove the existence of superconductivity in the material known as LK-99, others were sceptical of the validity of the claims. After weeks of investigation, experts concluded that LK-99 was likely not the elusive room-temperature superconductor but rather a different type of magnetic material with interesting properties. But what if we did stumble upon a room-temperature superconductor? What could this mean for the future of technology?

Superconductivity is a property of some materials at extremely low temperatures that allows them to conduct electricity with no resistance. Classical physics cannot explain this phenomenon; instead, we have to turn to quantum mechanics for a description of superconductors. Inside superconductors, electrons are paired up and can move through the structure of the material without experiencing any friction. The electron pairs are broken up by thermal energy, so they only exist at low temperatures. Therefore this theory, known as BCS theory after the physicists who formulated it, does not explain the existence of a high-temperature superconductor. To describe high-temperature superconductors, such as any that might occur at room temperature, more complicated theories are needed.

The magic of superconductors lies in their property of zero resistance. Resistance is a cause of energy waste in circuits due to heating, which leads to the unwanted loss of power and makes for inefficient operation. Physically, resistance is caused by electrons colliding with atoms in the structure of a material, causing energy to be lost in the process. The ability of electrons to move through superconductors without experiencing any collisions results in no resistance. Superconductors are useful as components in circuits because they waste no power through heating and are, in this respect, completely energy-efficient.
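As a back-of-the-envelope illustration of why zero resistance matters, the snippet below compares the ohmic loss P = I²R in an ordinary copper cable with the zero DC loss of a superconducting one. The cable dimensions and current are arbitrary assumptions chosen only to show the scale of the effect.

```python
# Ohmic loss P = I^2 * R for a copper cable vs. a superconducting cable.
# Dimensions and current are arbitrary illustrative assumptions.
rho_copper = 1.68e-8        # resistivity of copper at room temperature, ohm*m
length_m = 1_000.0          # 1 km cable
area_m2 = 1e-4              # 1 cm^2 cross-section
current_a = 1_000.0         # 1 kA

r_copper = rho_copper * length_m / area_m2      # R = rho * L / A
p_copper = current_a**2 * r_copper              # P = I^2 * R

r_super = 0.0                                   # zero DC resistance below Tc
p_super = current_a**2 * r_super

print(f"copper:         R = {r_copper:.3f} ohm, loss = {p_copper / 1e3:.0f} kW")
print(f"superconductor: R = {r_super:.3f} ohm, loss = {p_super:.0f} W")
```

With these example numbers the copper cable dissipates on the order of a hundred kilowatts as heat, while the superconducting cable, in principle, dissipates none.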
Normally, using superconductors requires complex methods of cooling them down to typical superconducting temperatures. For example, the first copper-oxide superconductor was found to superconduct only below about 35 K, roughly 240 °C colder than the temperature at which water freezes. The required cooling methods are expensive, which prevents superconductors from being used on a wide scale. A room-temperature superconductor, however, would give access to the beneficial properties of the material, such as its zero resistance, without the need for extreme cooling. The current record holders for the highest-temperature superconductors at ambient pressure are the cuprate superconductors, at around −135 °C. These are a family of materials made up of layers of copper oxides alternating with layers of other metal oxides. As the mechanism for their superconductivity is yet to be revealed, scientists are still scratching their heads over how these materials can exhibit superconducting properties. Once this mechanism is discovered, it may become easier to predict and find high-temperature superconducting materials, and it may lead to the first room-temperature superconductor. Until then, the search continues to unlock the next frontier in low-temperature physics…

For more information on superconductors: [1] Theory behind superconductivity [2] Video demonstration

Written by Madeleine Hales

Related articles: Semiconductor manufacturing / Semiconductor laser technology / Silicone hydrogel lenses
- Nanomedicine | Scientia News
Nanomedicine

Tiny solutions for big health problems

As the landscape of the healthcare field expands, new advances are coming forth, and one such area of interest is nanomedicine. Existing on a miniature scale measured in nanometres, nanomedicine and nanotechnology provide a revolutionary solution to many modern-day problems faced by the scientific community. Through this article, we'll explore what exactly nanomedicine is, its importance, its use in medicine, and its limitations and future prospects.

The nanoscale

When mentioning nanomedicine or nanotechnology, we refer to materials and particles existing on the nanoscale, between 1 and 100 nanometres. For reference, a human hair is 80,000-100,000 nanometres wide, so the technology is far smaller. Although the technology may seem small, its impact is too significant to be discredited. Because of their small size, nanoparticles hold several advantages that make them useful in biomedicine: they provide a greater surface area for molecular interactions in the body, and they are much easier to manipulate, allowing greater control and precision in diagnostics and medicine delivery (Figure 1).

Cancer drug delivery systems

Nanotechnology in the field of medicine is being widely used and tested with regard to its application as a drug delivery system. More recently, it is being investigated for its increased precision in delivering anti-cancer drugs to patients. Nanotechnology enables precise drug delivery through the construction of nanoscale structures called nanoparticles. These can be filled with anti-cancer drugs, and their outer structure can be further designed to include elements which target folate receptors, such as folic acid (vitamin B9), thus increasing their affinity for specific receptors in the body. Folate receptors tend to be overexpressed on the surface of many cancers, including pancreatic, breast, and lung cancers. So, by increasing selectivity and targeting only the cells which overexpress these receptors, the nanoparticles can deliver chemotherapy drugs with increased precision. This increased accuracy results in decreased toxicity to surrounding non-cancerous tissues whilst also reducing side effects.

In one recent set of experiments, lipid nanoparticles loaded with the anti-cancer drug edelfosine were tested on mice with mantle cell lymphoma. Lipid nanoparticles offer several advantages as a drug delivery system, including biocompatibility, greater physical stability, increased tolerability, and controlled release of the encapsulated drug. Lipid nanoparticles are also advantageous for their ability to be sized specifically to a tumour. In the study, mice bearing mantle cell lymphoma were administered 30 mg/kg of the encapsulated drug every 4 days, and it was found that metastasis was prevented; that is, cancer cells could not spread to other parts of the body. Additionally, it was found that, because of the way the nanoparticles were absorbed into the lymphatic system, they could accumulate in the thoracic duct, providing precise and slow release of the drug over time and thus preventing metastasis (Figure 2).

Imaging and diagnostics

Another area of use for nanotechnology is imaging and diagnostics.
This area of expertise is regarded as theranostics, which involves using nanoparticles as detectors to help locate the area of the body affected by a disease, such as the location of a tumour, and to aid in diagnosing illnesses. With regard to diagnostics, nanoparticles can also help identify what stage of the disease is being observed and enable us to gather more information to form a concrete treatment programme for the patient, thus providing a personalised touch to their care. Nanomaterials can be used to engineer different types of nanoparticles which enhance contrast on CT and MRI scans, so that diseases can be detected more easily by being more visible than on traditional scans. In collaboration with Belcher et al., Bardhan worked to develop different formulations of polymers that would be most effective in imaging and detecting cancers earlier. In the figure below, a core-shell nanoparticle was used for imaging. It comprises a yellow polymer with a red fluorescent dye, to increase the imaging contrast of the area, and a blue lanthanide nanoparticle. When the lanthanide particles are excited by a light source, fluorescence in the near-infrared range (NIR-II) is emitted, allowing for clear contrast and imaging. From the colours involved, the distribution of the imaged tumour and its microenvironment could be investigated more thoroughly in a mouse affected by ovarian cancer (Figure 3).

Nanobots

In recent times, new investment in the form of nanorobots has become apparent. Nanorobots are nanoelectromechanical systems whose size is very similar to that of human organelles and cells, so there are a variety of ways they could be helpful in healthcare, such as in the field of surgery. Traditionally, surgical tools are limited when working on a small scale. With nanorobots, however, it may be possible to access areas unreachable by surgical tools and catheters whilst also reducing recovery time and infection risk, as well as granting greater control and accuracy over the surgery. In a study conducted by Chen et al. (2020), the researchers used magnetotactic bacterial microrobots, steered by magnetic fields, to kill the bacterium Staphylococcus aureus. Using a microfluidic chip, the microrobots were guided to the target site and then programmed to attach themselves to the bacteria. Once attached, the viability of the bacteria was reduced by the swinging magnetic fields generated by the device. Although this research is promising, further work must be conducted to understand the compatibility of these nanotechnologies with the human body and any side effects they may have (Figure 4).

Challenges and safety concerns

From the evidence explored above, it is evident that nanotechnology holds much promise in the field of healthcare. However, these technologies are not without their challenges and reservations when it comes to introducing them into human bodies. The human body is incredibly complex, and the complete biocompatibility of nanoparticles, particularly nanobots, is currently under-researched and under-reviewed. To use them extensively, it is vital first to understand how safe they are and their efficacy in treatment and diagnosis. A summary of some of the advantages and disadvantages of these nanotechnologies is given below (Figure 5).
The future of nanotechnology in biomedicine

In conclusion, nanotechnology is an extensive and optimistic field at the forefront of changing medical care from diagnosis to treatment. It has the potential to answer many pressing questions in healthcare, including decreasing cytotoxicity via precise drug delivery systems, increasing accuracy in diagnosis, and possibly becoming a novel tool in surgery. Although it is imperative for there to be new and evolved techniques to increase the quality of care for patients, it is vital not to rush and to be thorough in our approach. This involves undertaking further research, including conducting clinical trials when investigating the use of nanotechnology inside the human body; these will test for tissue compatibility, side effects, efficacy, and even dosage when using nanoparticles for drug delivery. In summary, the transformative role of nanomedicine is undeniable. It offers a path to a more personalised and precise healthcare system, allowing researchers to reshape treatment, diagnosis, and patient well-being, though its limitations are yet to be overcome.

Written by Irha Khalid

Related articles: Nanoparticles: the future of diabetes treatment? / Semi-conductor manufacturing / Room-temperature superconductor / Silicone hydrogel lenses
- Schizophrenia, Inflammation and Accelerated Aging: a Complex Medical Phenotype | Scientia News
Schizophrenia, Inflammation and Accelerated Aging: a Complex Medical Phenotype

Setting neuropsychiatry in a wider medical context

In novel research by Campeau et al. (2022), 742 proteins were analysed in blood plasma from 54 schizophrenic participants and 51 age-matched healthy volunteers. This investigation validated the previously contentious link between premature ageing and schizophrenia by testing for a wide variety of proteins involved in cognitive decline, ageing-related comorbidities, and biomarkers of earlier-than-average mortality. The results demonstrated that age-linked changes in protein abundance occur earlier in life in people with schizophrenia. These data also help to explain the heightened incidence of age-related disorders and early all-cause death in schizophrenic people, with protein imbalances associated with both phenomena present in all schizophrenic age strata over age 20.

This research is the result of years of medical interest in the biomedical underpinnings of schizophrenia. The comorbidities and earlier death associated with schizophrenia were focal points of research for many years, but only now have valid explanations been posed for the presence of such phenomena. The explanation offered in this study for the greater incidence of early death in schizophrenia was the increased abundance of certain proteins. Specifically, these included biomarkers of heart disease (Cystatin-3, Vitronectin), blood-clotting abnormalities (Fibrinogen-B) and an inflammatory marker (L-Plastin). These proteins were tested for because of their inclusion in a dataset of protein biomarkers of early all-cause mortality in healthy and mentally ill people published by Ho et al. (2018) in the Journal of the American Heart Association. Furthermore, a protein linked to degenerative cognitive deficit with age, Cystatin C, was present at increased levels in schizophrenic participants both under and over the age of 40. This may help explain why antipsychotics have limited effectiveness in reducing the cognitive effects of schizophrenia.

In this study, schizophrenic participants under 40 had plasma protein profiles similar to those of the healthy over-60 stratum, including biomarkers of cognitive decline, age-related disease and death, and they showed the same likelihood of incidence of the latter phenomena as the healthy over-60 set. These results could point to a role for medications normally used to treat age-related cognitive decline and mortality-linked protein abundances in treating schizophrenia. One of these options is polyethylene glycol-Cp40, a C3 inhibitor used to treat paroxysmal nocturnal haemoglobinuria, which could be used to ameliorate the risk of developing age-related comorbidities in schizophrenic patients. This treatment may be effective in reducing C3 activation, which would reduce opsonisation (the tagging of material detected as foreign in the blood). When overexpressed, C3 can cause the opsonisation of healthy blood cells, leading to haemolysis, which can contribute to the reduced blood counts implicated in cardiac events and other comorbidities. However, whether or not this treatment would benefit those with schizophrenia is yet to be proven. The potential of this research to catalyse new treatment options for schizophrenia cannot be overstated.
Since the publication of Kilbourne et al. in 2009, the impact of cardiac comorbidities in driving early death in schizophrenic patients has been accepted medical dogma. The discovery of exact protein targets for reducing the incidence of age-linked conditions and early death in schizophrenia will allow the condition to be treated more holistically, with greater recognition of the fact that schizophrenia is not only a psychiatric illness but also a neurocognitive disorder with affiliated comorbidities that have to be adequately prevented.

Written by Aimee Wilson

Related article: Genetics of ageing and longevity
- Advances in mass spectrometry technology | Scientia News
Advances in mass spectrometry technology

Pushing the boundaries of analytical chemistry

In the rapidly evolving field of analytical chemistry, recent technological innovations in mass spectrometry have revolutionised the analysis and characterisation of molecules. These advancements, including high-resolution mass analysers, ion mobility spectrometry (IMS), and ambient ionisation techniques, are pushing the boundaries of what can be achieved in chemical analysis.

Mass spectrometry is a powerful analytical technique that provides qualitative and quantitative information on an analyte. It is used to measure the mass-to-charge ratio (m/z) of one or more molecules present in a sample. The process consists of:

Inlet - allows the analyte to be introduced into the mass spectrometer (MS). This could be a direct inlet, or gas chromatography (GC) / liquid chromatography (LC) to allow some separation before MS.
Ion source - ensures that the analyte is ionised (i.e. carries a net charge); there are various types of ion source depending on the analyte.
Analysers - bring about a change in the velocity or trajectory of an ion, from which the ion's m/z can be determined. Multiple analysers can be placed in tandem, and different analysers can be combined to allow greater scope for analysis. A detection system is also required to amplify and measure ion signals, and analysers and detectors need to be held under low pressure, near vacuum.
Detector - collects charge signals from the ion beams; the electronic signals from the ions are then digitised by the computer to produce a mass spectrum of the analyte.
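For a positively charged analyte carrying n protons, the measured quantity is simply the ion's mass divided by its charge. A small worked example, using an arbitrary 1000 Da peptide rather than any compound from the text, looks like this:

\[
\frac{m}{z} = \frac{M + n\,m_{\mathrm{H^+}}}{n}
\qquad\text{e.g.}\qquad
\frac{1000.00 + 2(1.00728)}{2} \approx 501.01
\]

so the same molecule appears at different m/z values depending on how many charges it acquires in the ion source, which is why charge-state assignment is part of interpreting a spectrum.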
High-resolution mass analysers

One of the most significant breakthroughs in mass spectrometry is the development of high-resolution mass analysers. These instruments can differentiate between ions with extremely close mass-to-charge ratios, providing unprecedented levels of accuracy and specificity in compound identification. High-resolution mass spectrometry enables scientists to resolve complex mixtures and detect trace components with exceptional sensitivity, making it invaluable in fields such as metabolomics, environmental analysis, and drug discovery.

Ion mobility spectrometry (IMS)

Ion mobility spectrometry is another cutting-edge technology that enhances the capabilities of mass spectrometry. IMS separates ions based on their size, shape, and charge in the gas phase, providing an additional dimension of separation before mass analysis. This technique improves the resolution of complex samples, particularly for isomeric compounds that are challenging to distinguish using conventional methods. IMS coupled with mass spectrometry is widely applied in metabolomics, proteomics, and lipidomics research, enabling deeper insights into molecular structures and interactions.

Ambient ionisation techniques

Traditional mass spectrometry methods often require extensive sample preparation and ionisation processes in controlled laboratory environments. Ambient ionisation techniques have transformed this paradigm by enabling direct analysis of samples in their native states, including solids, liquids, and gases, without prior extraction or purification steps. Techniques such as desorption electrospray ionisation (DESI) and direct analysis in real time (DART) have expanded the scope of mass spectrometry applications to fields like clinical diagnostics, food safety, and forensic analysis. Ambient ionisation allows for rapid, on-site measurements with minimal sample handling, revolutionising point-of-care testing and field analysis.

In conclusion, the continuous evolution of mass spectrometry technology is reshaping the landscape of analytical chemistry. These innovations not only empower researchers to explore new realms of chemical analysis but also facilitate applications in areas such as precision medicine, environmental monitoring, and materials science. As these technologies continue to advance, the future holds even greater promise for pushing the boundaries of analytical chemistry and unlocking the mysteries of the molecular world.

Written by Anam Ahmed

Related article: Advancements in semi-conductor manufacturing
- Artificial intelligence: the good, the bad, and the future | Scientia News
Artificial intelligence: the good, the bad, and the future

A Scientia News Biology collaboration

Introduction

Artificial intelligence (AI) shows great promise in education and research, providing flexibility, curriculum improvements, and knowledge gains for students. However, concerns remain about its impact on critical thinking and long-term learning. For researchers, AI accelerates data processing but may reduce originality and replace human roles. This article explores the debates around AI in academia, underscoring the need for guidelines to harness its potential while mitigating risks.

Benefits of AI for students and researchers

Students

Within education, AI has created a buzz for its usefulness in helping students complete daily and complex tasks. Specifically, students have used this technology to enhance their decision-making, improve their workflow and have a more personalised learning experience. A study by Krive et al. (2023) demonstrated this by having medical students take an elective module to learn about using AI to enhance their learning and to understand its benefits in healthcare. Traditionally, medical studies have been inflexible, with difficulty integrating pre-clinical theory and clinical application. The module created by Krive et al. introduced a curriculum with assignments featuring online clinical simulations to apply pre-clinical theory to patient safety. Students scored a 97% average on knowledge exams and 89% on practical exams, showing AI's benefits for flexible, efficient learning. Thus, AI can help enhance student learning experiences whilst saving time and providing flexibility.

Additionally, we gathered testimonials from current STEM graduates and students to better understand the implications of AI. In Figure 1, we can see that the students use AI to support their exam preparation, get to grips with difficult topics, and summarise long texts to save time, whilst exercising caution in the knowledge that AI has limitations. This shows that AI has the potential to become a personalised learning assistant that improves comprehension and retention and helps organise thoughts, all of which allow students to enhance their skills through support rather than reliance on the software. Despite the mainstream uptake of AI, one student has chosen not to use it for fear of becoming less self-sufficient, and we will explore this dynamic in the next section.

Researchers

AI can be very useful for academic researchers, for example by making the process of writing and editing papers based on new scientific discoveries less slow, or even facilitating it altogether. As a result, society may gain innovative ways to treat diseases and increase the current knowledge of different academic disciplines. AI can also be used for data analysis, interpreting large amounts of information; this saves not only time but also much of the money required to complete this process accurately. The statistical and graphical findings could be used to influence public policy or help different businesses achieve their objectives. Another quality of AI is that it can be tailored to the researcher's needs in any field, from STEM to subject areas outside it, indicating that AI's utilities are endless. For academic fields requiring researchers to look at things in greater detail, like molecular biology or immunology, AI can help generate models to understand the molecules and cells involved in such mechanisms.
This can be through genome analysis and possibly next-generation sequencing. Within education, researchers working as lecturers can utilise AI to deliver concepts and ideas to students and even make the marking process more robust. In turn, this can decrease the burnout educators experience in their daily working lives and may help establish a work-life balance, as a way to feel more at ease over the long term.

Risks of AI for students and researchers

Students

With great power comes great responsibility, and with the advent of AI in school and learning there is increasing concern about the quality of learners produced by schools, and whether their attitudes to learning and critical-thinking skills are hindered or lacking. This concern is echoed in the results of a study conducted by Ahmad et al. (2023), which examined how AI affects laziness and decision making in university students. The results showed that the use of AI in education accounted for 68.9% of the laziness and 27.7% of the loss in decision-making ability observed in 285 students across Pakistani and Chinese institutes. This confirms some of the worries shared with us in the testimonials in Figure 1 and suggests that students may become more passive learners rather than developing key life skills. It may even lead to a reluctance to learn new things and to seeking out 'the easy way' rather than enjoying obtaining new facts.

Researchers

Although AI can be great for researchers, it carries its own disadvantages. For example, it could lead to reduced originality in writing, and this type of misconduct jeopardises the reputation of the people working in research. Also, the software is only as effective as the type of data it is specialised in, so a given AI system could misinterpret the data. This has downstream consequences that can affect how research institutions are run and, beyond that, hinder scientific inquiry. Therefore, if severely misused, AI can undermine the integrity of academic research, which could impede the discovery of life-saving therapies. Furthermore, there is the potential for AI to replace researchers, suggesting that there may be fewer opportunities to employ aspiring scientists. When given insufficient information, AI can be biased, which can be detrimental; one article found that its use in a dermatology clinic could put certain patients at risk of missed skin cancer diagnoses and suggested that more diverse demographic data are needed for the AI to work effectively. Thus, it needs to be applied strategically to ensure it works as intended and does not cause harm.

Conclusion

Considering the uses of AI for students and researchers, it is advantageous to them by filling knowledge gaps, aiding in data analysis, boosting general productivity, supporting engagement with the public, and much more. Its possibilities for enhancing industries such as education and drug development are endless for propagating societal progression. Nevertheless, the drawbacks of AI cannot be ignored, such as the chance of it replacing people in jobs or the fact that it is not completely accurate. Therefore, guidelines must be defined for its use as a tool to ensure a healthy relationship between AI and students and researchers. According to the European Network for Academic Integrity (ENAI), using AI for proofreading, spell checking, and as a thesaurus is admissible. However, it should not be listed as a co-author because, unlike people, it is not liable for any reported findings.
As such, depending on how AI is used, it can be a tool that helps society or one that harms it, so it is not inherently good or bad for students, researchers and society in general.

Written by Sam Jarada and Irha Khalid

Introduction and 'Student' arguments by Irha; Conclusion and 'Researcher' arguments by Sam

Related articles: Evolution of AI / AI in agriculture and rural farming / Can a human brain be uploaded to a computer?
- Advancements in Semiconductor Laser Technology | Scientia News
Advancements in Semiconductor Laser Technology

What they are, uses, and future outlook

Lasers have revolutionized many fields, from telecommunications and data storage to medical diagnostics and consumer electronics. Among semiconductor laser technologies, edge-emitting lasers (EELs) and vertical-cavity surface-emitting lasers (VCSELs) have emerged as critical components due to their unique properties and performance. These lasers generate light through the recombination of electrons and holes in a semiconductor material. EELs are known for their high power and efficiency, and they are extensively used in fiber-optic communications and laser printing. VCSELs, on the other hand, are compact and are used for applications like 3D sensing. Traditionally, VCSELs have struggled to match the efficiency levels of EELs; however, a recent breakthrough in multi-junction VCSELs has demonstrated remarkable efficiency improvements, positioning VCSELs to surpass EELs in various applications. This article covers the basics of these laser technologies and their recent advancements.

EELs are lasers in which light is emitted from the edge of the semiconductor wafer. This design contrasts with VCSELs, which emit light perpendicular to the wafer surface. EELs are known for their high power output and efficiency, which makes them particularly suitable for applications that require long-distance light transmission, such as fiber-optic communications, laser printing and industrial machining. An EEL consists of an active region where electron-hole recombination occurs to produce light. This region is sandwiched between two mirrors forming a resonant optical cavity. The emitted light travels parallel to the plane of the semiconductor layers and exits from the edge of the device. This design allows EELs to achieve high gain and power output, making them effective for transmitting light over long distances with minimal loss.

A VCSEL is a semiconductor laser that emits light perpendicular to the surface of the semiconductor wafer, unlike an EEL, which emits light from the edge. VCSELs have gained popularity due to their lower threshold currents and their ability to be formed into high-density arrays. A VCSEL consists of an active region where electron-hole recombination occurs to produce light. This region is situated between two highly reflective mirrors, which form a vertical resonant optical cavity. The light is emitted perpendicular to the wafer surface, which allows for efficient vertical emission and easy integration into arrays.

Recent advancements in VCSEL technology mark a significant milestone in the field of semiconductor lasers, in particular the development of multi-junction VCSELs, which has led to improvements in the power conversion efficiency (PCE) of the laser. Research conducted by Yao Xiao and colleagues has demonstrated the potential of multi-junction VCSELs to achieve efficiency levels that were previously thought unattainable. This work focuses on cascading multiple active regions within a single VCSEL to enhance gain and reduce threshold current, which leads to higher overall efficiency. The study employed a multi-junction design in which several active regions are stacked vertically within the VCSEL. This design increases the volume of the gain region and lowers the threshold current density, resulting in higher efficiency.
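For context, power conversion efficiency is the ratio of emitted optical power to the electrical power driving the device; the figures below are purely illustrative and not values from the study:

\[
\eta_{\mathrm{PCE}} = \frac{P_{\mathrm{optical}}}{I\,V}
\qquad\text{e.g.}\qquad
\frac{7.4\ \mathrm{W}}{2\ \mathrm{A} \times 5\ \mathrm{V}} = 0.74 = 74\,\%
\]

In cascaded designs each injected carrier can contribute a photon in more than one junction, which is one reason stacking active regions can push this ratio up.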
Experimental results from the study revealed that a 15-junction VCSEL achieved a PCE of 74% at room temperature when driven by nanosecond pulses. This efficiency is the highest ever reported for VCSELs and represents a significant leap forward from previous records. Simulations conducted as part of the study indicated that a 20-junction VCSEL could potentially reach a PCE exceeding 88% at room temperature, suggesting that further optimization and refinement of the multi-junction approach could yield even greater efficiencies.

The implications of this research are profound for the future of VCSEL technology. Achieving such high efficiencies places VCSELs as strong competitors to EELs, particularly in applications where energy efficiency and power density are critical. The multi-junction VCSELs demonstrated in the study show promise for a wide range of applications, and future work may focus on optimizing the fabrication process, reducing thermal-management issues and exploring new materials to further enhance performance. Integrating these high-efficiency VCSELs into commercial products could revolutionize industries reliant on laser technology.

Written by Arun Sreeraj

Related articles: The future of semi-conductor manufacturing / The search for a room-temperature superconductor / Advances in mass spectrometry