Search Index
- Anticancer Metal Compounds | Scientia News
Anticancer Metal Compounds How metal compounds can be used as anti-cancer agents Metal compounds based on platinum, cobalt and ruthenium are used as anticancer agents. Research into them matters because conventional chemotherapy is not selective: it is very toxic to patients, damaging the DNA of normal cells as well as cancerous ones. These metal compounds act as anticancer agents because their metal centres can adopt several oxidation states. Their selectivity for cancer cells arises from this property, as the metals' variable oxidation states enable redox reactions. Since cancers exist in hypoxic environments, the oxidation state of the metal can change there, releasing the drug only within the tumour. Prodrugs, for example, are relatively inert metal complexes in relatively high oxidation states: Pt(IV) and Co(III) complexes are selective carriers that undergo reduction in hypoxic cancerous environments, releasing their anticancer drugs. Co(III) is reduced to Co(II), and Pt(IV) to Pt(II), under hypoxia. Cobalt's two oxidation states behave very differently: Co(III) is kinetically inert, with a low-spin 3d6 configuration, while Co(II) is labile (high-spin 3d7). When Co(III) is reduced to Co(II) in a hypoxic environment, the active molecule is released and restored to its active form, killing cancer cells. Cobalt can also bind ligands such as nitrogen mustards and curcumin, exhibiting redox reactivity useful for cancer therapy. Nitrogen mustards are highly toxic owing to their DNA-alkylating and cross-linking activity. In vivo they are not selective for tumour tissue, but they can be deactivated by coordination to Co(III) and released on reduction to Co(II) in hypoxic tumour tissue. This reduces systemic toxicity, yielding an efficient anticancer drug. Platinum anticancer compounds treat ovarian, cervical and neck cancers. Pt(IV) prodrugs of cisplatin exhibit redox-mediated anticancer activity and are highly effective against tumours.
Platinum causes severe side effects for patients, so Pt(IV) prodrugs are used, being reduced selectively at tumour sites. Ruthenium is used for cancer therapy as a less toxic metal than platinum. Ruthenium-based targeted therapy selectively disrupts specific cellular pathways on which cancer cells rely for growth and metastasis. Reduction of Ru(III) to Ru(II) occurs selectively in hypoxic, reducing environments; tumours also overexpress transferrin receptors, to which transferrin-carried ruthenium binds. Overall, metal compounds have attracted great interest for cancer treatment because of their redox activity. They are selective for cancer cells, limiting patients' side effects. Such therapy shows how important inorganic chemistry is to medicine. By Alice Davey Related article: MOFs in cancer drug delivery
- A comprehensive guide to the Relative Strength Index (RSI) | Scientia News
A comprehensive guide to the Relative Strength Index (RSI) The maths behind trading In this piece, we will delve into the essential concepts surrounding the Relative Strength Index (RSI). The RSI serves as a gauge of the strength of price momentum and offers insight into whether a particular stock is overbought or oversold. Throughout this exploration, we will demystify the calculations underlying the RSI, explore its significance in evaluating market momentum, and unveil its practical applications for traders. From discerning opportune moments to buy or sell based on RSI values to identifying potential shifts in market trends, we will unravel the mathematical intricacies that underpin this critical trading indicator. Please note that none of the content below should be taken as financial advice; it is for educational purposes only. This article does not recommend that investors base their decisions on technical analysis alone. As the name indicates, the RSI measures the strength of a stock's momentum and can show when a stock may be considered overbought or oversold, allowing us to make a more informed decision about whether to enter a position or hold off a little longer. It's all very well to know that 'you should buy when RSI is under 30 and sell when RSI is over 70', but in this article I will attempt to explain why this is the case and what the RSI is really measuring. The calculations The relative strength index is an index of the relative strength of momentum in a market. This means that its values range from 0 to 100 and are simply a normalised relative strength. But what is relative strength? It is the ratio of higher closes to lower closes, built from average gains and losses: Initial Average Gain = Sum of gains over the past 14 days / 14 Initial Average Loss = Sum of losses over the past 14 days / 14
Over a fixed period of usually 14 days (but sometimes 21), we measure how much the price of the stock increased on each trading day and take the mean; we then do the same for the losses. Subsequent average gains and losses are then smoothed: Average Gain = [(Previous Avg. Gain * 13) + Current Day's Gain] / 14 Average Loss = [(Previous Avg. Loss * 13) + Current Day's Loss] / 14 With this, we can now calculate relative strength: Relative Strength (RS) = Average Gain / Average Loss Therefore, if our stock gained more than it lost over the past 14 days, our RS value would be >1. On the other hand, if it lost more than it gained, our RS value would be <1. Relative strength tells us whether buyers or sellers are in control of the price: if buyers are in control, the average gain exceeds the average loss, so relative strength is greater than 1. In a bearish market, if this begins to happen, we can say that buyers' momentum is increasing; the momentum is strengthening. We can normalise relative strength into an index using the following equation: RSI = 100 - [100 / (1 + RS)] Traders then use the RSI in combination with other techniques to assess whether to buy or sell. When a market is ranging, meaning that price is bouncing between support and resistance (it has similar highs and lows for a period), we can use the RSI to see when we may be entering a trend. When the RSI approaches 70, it indicates that the asset is being overbought; in a ranging market a correction is then likely, and the price will fall so that the RSI settles back towards 50. The opposite is likely to happen when the RSI dips to 30. Price action is deemed extreme, and a correction is likely. It should, however, be noted that this behaviour is only likely in assets showing mean-reversion characteristics. In a trending market, the RSI can be used to indicate a possible change in momentum.
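The calculation described above can be sketched in code. The following minimal Python illustration (not part of the original article, and not trading-grade code) assumes a plain list of daily closing prices and the standard 14-day period:

```python
# Toy RSI calculation following the article's formulas:
# initial averages are simple means over the first 14 changes,
# later averages use Wilder smoothing, and RSI = 100 - 100/(1+RS).

def rsi(closes, period=14):
    """Return the RSI series for a list of closing prices."""
    gains, losses = [], []
    for prev, curr in zip(closes, closes[1:]):
        change = curr - prev
        gains.append(max(change, 0.0))
        losses.append(max(-change, 0.0))

    # Initial averages over the first `period` daily changes.
    avg_gain = sum(gains[:period]) / period
    avg_loss = sum(losses[:period]) / period

    rsis = []
    for gain, loss in zip(gains[period:], losses[period:]):
        # Wilder smoothing: (previous average * 13 + today's value) / 14
        avg_gain = (avg_gain * (period - 1) + gain) / period
        avg_loss = (avg_loss * (period - 1) + loss) / period
        if avg_loss == 0:
            rsis.append(100.0)  # pure gains: RSI saturates at 100
        else:
            rs = avg_gain / avg_loss            # relative strength
            rsis.append(100 - 100 / (1 + rs))   # normalised to 0-100
    return rsis
```

A stock that only ever rises produces an RSI of 100, the extreme overbought reading; mixed gains and losses pull the value back towards 50.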
If prices are falling and the RSI reaches a low and then, a few days later, a higher low (i.e. the second low is not as low as the first), it indicates a possible change in momentum; we say there is a bullish divergence. Divergences are rare when a stock is in a long-term trend but are nonetheless a powerful indicator. In conclusion, the relative strength index aims to describe changes in the momentum of price action by analysing and comparing previous days' gains and losses. From this a value is generated, and at the extremes a change in momentum may take place. The RSI is not meant to be predictive, but it is very helpful in confirming trends indicated by other techniques. Written by George Chant
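The bullish-divergence pattern just described can be stated as a tiny predicate. This is an illustrative sketch, not from the article: the inputs are assumed to be the price and RSI values at two successive swing lows.

```python
# Bullish divergence: price makes a lower low while the RSI makes a
# higher low. Inputs are the values at two successive swing lows.

def bullish_divergence(price_low1, price_low2, rsi_low1, rsi_low2):
    """True if the second low is lower in price but higher in RSI."""
    return price_low2 < price_low1 and rsi_low2 > rsi_low1
```

For example, a price low of 95 after one of 100, paired with RSI lows of 35 after 28, qualifies as a bullish divergence; identifying the swing lows themselves is the harder part in practice.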
- The genesis of life | Scientia News
The genesis of life Life's origins Did the egg or the chicken come first? This question is often pondered regarding life's origin and how biological systems came into play. How did chemistry give way to biology to support life? And how have we evolved into such complex organisms? The ingredients, conditions and thermodynamically favoured reactions hold the answer, but understanding the inner workings of life's beginnings poses a challenge for us scientists. Under an empirical approach, how can we address these questions if these events occurred 3.7 billion years ago? The early atmosphere of the Earth To approach these questions, it is relevant to understand the atmospheric contents of the primordial Earth. With a lack of oxygen, the predominant make-up included CO2, NH3 and H2, creating a reducing environment to drive chemical reactions. When the Earth cooled and the atmosphere underwent condensation, pools of chemicals formed - this is known as the "primordial soup". It is thought that reactants could collide within this "soup" to synthesise nucleotides, forming nitrogenous bases and bonds such as glycosidic and hydrogen bonds. Such nucleotide monomers were perhaps polymerised into long chains for nucleic acid synthesis - that is, RNA - via abiotic synthesis. Thus, if we have nucleic acids, genetic information could have been stored and passed down the line, allowing for our eventual evolution. Conditions for nucleic acid synthesis The environment supported the formation of monomers for said polymerisation. For example, hydrothermal vents could have provided reducing power via protons, allowing for the protonation of structures and providing the free energy for bond formation. Biology, of course, relies on protons for the proton gradient in ATP synthesis at the mitochondrial membrane and, in general, for acid-base catalysis in enzymatic reactions.
Therefore, it is safe to say protons played a vital role in life's emergence. The eventual formation of structures by protonation and deprotonation underlies the enzymatic theory of life's origins: some self-catalytic ability for replication in a closed system, and the evolution of complex biological units. This is the "RNA World" theory, which will be discussed later. Another theory is wet-dry cycling at the edge of hydrothermal pools. This theory is proposed by David Deamer, who suggests that nucleic acid monomers placed in acidic (pH 3) and hot (70-90 degrees Celsius) pools could undergo condensation reactions for ester bond formation. It highlights the need for low water activity and a "kinetic trap" in which the condensation reaction rate exceeds the hydrolysis rate. The heat of the pool supplies the activation energy for the localised generation of polymers without the need for a membrane-like compartment. But even if this were possible and nucleic acids could be synthesised, how could we "keep them safe"? This issue is addressed by the theory of "protocells" formed from fatty acid vesicles. Jack Szostak suggests a phase transition (that is, a pH decrease) allowed for the construction of bilayer membranes from fatty acid monomers, which is homologous to what we see now in modern cells. The fatty acids in these vesicles are able to "flip-flop" to allow the exchange of nutrients or nucleotides into and out of the vesicles. It is suggested that clay-encapsulated nucleotide monomers were brought into the protocell by this flip-flop action. Vesicles could grow by competing with surrounding smaller vesicles. Larger vesicles are thought to be those harbouring long polyanionic molecules - that is, RNA - which creates immense osmotic pressure pushing outward on the protocell, driving absorption of smaller vesicles. This represents the Darwinian "survival of the fittest" principle, in which cells with more RNA are favoured for survival.
The RNA World Hypothesis DNA is often seen as the "saint" of all things biology, given its ability to store and pass genetic information to mRNA, which can then use this information to synthesise polypeptides. This is the central dogma, of course. However, the RNA world hypothesis suggests that RNA arose first, owing to its ability to form catalytic 3D structures and store genetic information, which could have allowed for the later synthesis of DNA. This makes sense when you consider that the primer for DNA replication is made of RNA. If RNA did not come first, how could DNA replication be possible? Many other observations suggest RNA evolution preceded that of DNA. So, if RNA arose as a simple polymer, its ability to form 3D structures could have given rise to ribozymes (RNA with enzymatic function) within these protocells. Ribozymes, such as RNA ligase and RNA polymerase, could have allowed for self-replication, and mutation in the primary structure could then have allowed evolution to occur. If we have a catalyst, in a closed system, with nutrient exchange, then why would life's formation not be possible? But how can we show that RNA can arise in this way? The answer is SELEX - systematic evolution of ligands by exponential enrichment. This system was developed by Jack Szostak, who wanted to show that the evolution of complex RNAs - ribozymes - in a test tube was possible. A pool of random, fragmented RNA molecules is added to a chamber and run through a column with beads. These beads harbour some sequence or attraction to the RNA molecules the column is selecting for. Those that attach can be eluted, and those that do not are discarded. The bound RNA can then be rerun through SELEX, with the conditions in the column made more stringent so that only the most complementary RNAs bind. This has allowed for the development of RNA ligase and RNA polymerase ribozymes - thus, self-replication of RNA is possible.
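The select-elute-amplify loop described above can be sketched as a toy simulation. Everything concrete below - the "binding" rule (containing a short motif), the alphabet, pool size, mutation rate and round count - is an invented illustration of the enrichment principle, not Szostak's actual protocol:

```python
import random

# Toy SELEX loop: keep sequences that "bind" (here, contain a target
# motif), amplify the survivors with occasional copying errors, and
# repeat. The motif and all parameters are illustrative assumptions.

BASES = "ACGU"

def random_pool(n, length, rng):
    return ["".join(rng.choice(BASES) for _ in range(length)) for _ in range(n)]

def binds(seq, motif):
    return motif in seq  # stand-in for affinity to the column's beads

def mutate(seq, rate, rng):
    return "".join(rng.choice(BASES) if rng.random() < rate else b for b in seq)

def selex(rounds=5, pool_size=500, length=20, motif="GGAC", seed=1):
    rng = random.Random(seed)
    pool = random_pool(pool_size, length, rng)
    for _ in range(rounds):
        bound = [s for s in pool if binds(s, motif)]  # elute the binders
        if not bound:
            break
        # "Amplify" survivors back to pool size, with 1% per-base errors.
        pool = [mutate(rng.choice(bound), 0.01, rng) for _ in range(pool_size)]
    return pool

enriched = selex()
motif_fraction = sum(binds(s, "GGAC") for s in enriched) / len(enriched)
```

Only a few percent of the random starting pool contains the motif, but after a handful of rounds the vast majority of the pool does - the exponential enrichment the name refers to.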
SELEX helps us understand how the evolution of RNA on the primordial Earth could have been possible. This is also supported by meteorites, such as carbonaceous chondrites, which burnt up in the Earth's atmosphere while encapsulating organic material at their centres. Chondrites found in Antarctica have been found to contain more than 80 amino acids (some of which are not compatible with life). These chondrites also included nucleobases. So, if such monomers can be synthesised in a hostile environment in outer space or in our atmosphere, then the theory of abiotic synthesis is supported. Furthermore, it is relevant to address the abiotic synthesis of amino acids, since the evolution of catalytic RNA could have some complementarity for polypeptide synthesis. Miller and Urey (1953) set up a simple experiment containing gases representing the early primordial Earth (methane, hydrogen, ammonia and water). They used electrodes to provide an electrical discharge (meant to simulate lightning or volcanic eruption) to the gases and then condensed them. The water in the other chamber turned pink/brown, and following chromatography they identified amino acids in the mixture. Similar simple reactions could well have occurred on the early Earth. Conclusion The abiotic synthesis of nucleotides and amino acids, and their later polymerisation, supports the theories that address chemistry moving toward biological life. Protocells containing such polymers could have been selected on their "fitness", and these could have mutated to allow for the evolution of catalytic RNA. The experiments mentioned represent a small fragment of those carried out to answer the questions of life's origins, but the evidence provides firm ground for the emergence of life and its evolution to the complexity we know today. Written by Holly Kitley
- What you should know about rAAV gene therapy | Scientia News
What you should know about rAAV gene therapy Recombinant adeno-associated viruses (rAAVs) Curing a disease with one injection: the dream, the hope, the goal of medicine. Gene therapy brings this vision to reality by harnessing viruses into therapeutic tools. Among them, adeno-associated viruses (AAVs) are the most used: genetically modified AAVs, named recombinant AAVs (rAAVs), are already used in six gene therapies approved for medical use. Over 200 clinical trials are ongoing. AAV, a virus reprogrammed to cure diseases Gene therapy inserts genetic instructions into a patient to correct a mutation responsible for a genetic disorder. Thanks to genetic engineering, researchers have co-opted AAVs (along with adenoviruses, herpes simplex viruses and lentiviruses) into delivering these instructions. Researchers have swapped the genes that allow AAVs to jump from person to person with genes to treat diseases. In other words, the virus has been genetically reprogrammed into a vector for gene transfer. The gene supplemented is referred to as the transgene. Biology of AAVs AAVs were discovered in the 1960s as contaminants in cell cultures infected by adenoviruses, a coexistence to which they owe their name. AAVs consist of a protein shell (capsid) wrapped around the viral genome, a single strand of DNA approximately 4,700 bases (4.7 kb) long. The genome is capped at both ends by palindromic repetitive sequences folded into T-shaped structures, the inverted terminal repeats (ITRs). Sandwiched between the ITRs are four genes. They determine capsid components (cap), capsid assembly (aap), genome replication (rep) and viral escape from infected cells (maap) (Figure 1, top panel). The replacement of these four genes with a transgene of therapeutic use, and its expression by infected cells (transduction), lie at the heart of gene therapy mediated by rAAVs.
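As a back-of-the-envelope illustration of this genome layout and the packaging limit it implies, the sketch below models an rAAV cassette as two flanking ITRs plus a promoter, the transgene and a poly-A signal. Only the ~4,700-base capacity and the cassette elements come from the article; the promoter and poly-A sizes are rough assumed figures (the ~145-base ITR is typical of AAV2), and the example gene lengths are approximate.

```python
# Illustrative model of an rAAV expression cassette and its packaging
# limit. Element sizes in bases; promoter and poly-A lengths are rough
# assumptions, not figures from the article.

AAV_CAPACITY = 4700  # approximate single-stranded genome length (bases)

def cassette_size(transgene_bp, itr_bp=145, promoter_bp=600, polya_bp=250):
    # Two ITRs flank the cassette; promoter upstream, poly-A downstream.
    return 2 * itr_bp + promoter_bp + transgene_bp + polya_bp

def fits_in_capsid(transgene_bp):
    return cassette_size(transgene_bp) <= AAV_CAPACITY

# A ~1,400-base Factor IX coding sequence fits comfortably, whereas the
# ~11,000-base full-length dystrophin coding sequence does not - which is
# why packaging capacity is listed among the limitations of rAAVs.
```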
Transgene transfer by rAAVs Researchers favour rAAVs as vectors because AAVs are safe (they are not linked to any disease and do not integrate into the genome), can maintain the production of a therapeutic gene for over ten years, and infect a wide range of tissues. In an rAAV, the ITRs are the only viral element preserved. The four viral genes are replaced by a therapeutic transgene and regulatory sequences to maximise its expression. Therefore, an rAAV contains the coding sequence of the transgene, an upstream promoter to induce transcription and a downstream regulatory sequence (poly-A tail) to confer stability on the mRNA molecules produced (Figure 1, bottom panel). Steps of rAAV transduction Based on the disease, rAAVs can be administered into the blood, an organ, a muscle or the fluid bathing the central nervous system (cerebrospinal fluid). rAAVs dock on target cells via a specific interaction between the capsid and proteins on the cell surface that serve as viral receptors and co-receptors. The capsid mainly dictates which cell types will be infected (cell tropism). Upon binding, the cell engulfs the virus into membrane vesicles (endosomes) typically used to digest and recycle material. The rAAVs escape the endosomes, avoiding digestion, and enter the nucleus, where the capsid releases the single-stranded DNA (ssDNA) genome, a process known as uncoating. The ITRs direct the synthesis of the second strand to reconstitute double-stranded DNA (dsDNA), the replication of the viral genome, and the concatenation of individual genomes into larger, circular DNA molecules (episomes) that can persist in the host cell for years. Nuclear proteins transcribe the transgene into mRNAs; the mRNAs are exported into the cytoplasm, where they are translated into proteins. The rAAV has achieved successful transduction: the transgene can start exerting its therapeutic effects. A simplified overview of rAAV transduction is presented in Figure 2.
The triumphs of rAAV gene therapies rAAV gene therapies are improving lives and saving patients. Unsurprisingly, the most remarkable examples come from the drugs already approved. Roctavian is an rAAV gene therapy for haemophilia A, a life-threatening bleeding disorder in which the blood does not clot properly because the body cannot produce coagulation Factor VIII. In a phase III clinical trial, Roctavian reduced bleeding rates by 85%, and most treated patients (128 out of 134) no longer needed regular administration of Factor VIII, the standard therapy for the disease, for up to two years after treatment. Similarly impressive results were noted for the rAAV Hemgenix, a gene therapy for haemophilia B (a bleeding disorder caused by the absence of coagulation Factor IX): Hemgenix reduced bleeding rates by 65%, and most treated patients (52 out of 54) no longer needed regular administration of Factor IX for up to two years. The benefits of Zolgensma are even more awe-inspiring. Zolgensma is an rAAV gene therapy for spinal muscular atrophy (SMA), a genetic disorder in which neurons in the spinal cord die, causing muscles to waste away irreversibly. The life expectancy of SMA patients can be as short as two years, so timing is critical. As a consequence, Zolgensma had to be tested in neonates: babies with the most severe form of SMA were dosed with the drug before six weeks of age and before symptom onset (SPRINT study). After 14 months, all 14 treated babies were alive and breathing without a ventilator, whereas only a quarter of untreated babies were. After 18 months, all 14 could sit without help, an impossible feat without Zolgensma. These and other resounding achievements are fuelling research on rAAV gene therapies.
Current limitations Scientists still have some significant hurdles to overcome:
● Packaging capacity: AAVs can fit only relatively short DNA sequences in their capsids, which rules out the replacement of many long genes associated with genetic disorders.
● Immunogenicity: 30-60% of individuals have antibodies against AAVs, which block rAAVs and prevent transduction.
● Tissue specificity: rAAVs often infect tissues which are not the intended target (e.g., inducing the expression of a transgene meant to treat a neurological disease in the liver rather than in neurons).
Gene therapies, not only those delivered by rAAVs, face an additional challenge, one only partially technological in nature: their price tags. Their prices - rAAV therapies range from $850,000 (£690,000) to $3,500,000 (£2,850,000) - make them inaccessible to most patients. A cautionary tale is already out there: Glybera, the first rAAV gene therapy approved for medical use, albeit only in Europe (2012), was discontinued in 2017 because it was too expensive. Research is likely to reduce the exorbitant manufacturing costs, but the time may have come to reconsider our healthcare systems. Notes One non-viral vector exists, but its development lags behind that of the viral vectors. Glybera for treating lipoprotein lipase deficiency, Luxturna for Leber congenital amaurosis, Zolgensma for spinal muscular atrophy, Roctavian for haemophilia A, Hemgenix for haemophilia B, and Elevidys for Duchenne muscular dystrophy. Written by Matteo Cortese, PhD Related articles: Germline gene therapy (GGT) / A potential treatment for HIV / Rabies
- Oliver Sacks | Scientia News
Oliver Sacks A life of neurology and literature If I had to credit one person for introducing me to the subject that would become my career choice, it would be Oliver Sacks. Trying to develop my interests and finding myself in a world of science textbooks that sounded too complicated - and often simply pedantic - made me desperate to find something that could somehow combine my love for science and my fondness for literature. Luckily, I managed to stumble upon "the poet laureate of medicine", a physician who presented real characters with true medical cases without putting a teenage girl to sleep. Oliver Wolf Sacks was born in London in 1933. He grew up in a family of doctors; his mother was one of the first female surgeons in England and his father was a general practitioner. His interest in science started at a young age, experimenting with his home chemistry set. Following in his parents' footsteps, he went on to study medicine at the University of Oxford before moving to the US for residency opportunities in San Francisco and Los Angeles. Although he enjoyed the sweeter life on the West Coast, by 1965 he decided to take up more permanent residence in New York, where he continued to work as a neurologist and eventually taught at Columbia and NYU. It was in the city of dreams that he started his literary journey. One of his main creative inspirations was born from his time as a consultant neurologist at Beth Abraham Hospital in the Bronx. There, he found a group of patients who had been in a catatonic state due to encephalitis lethargica. They appeared frozen, trapped in their own bodies, unable to come out. Sacks decided to start a series of trials with L-DOPA, a dopamine precursor drug which was then still in the experimental stage as a treatment for Parkinson's. Almost miraculously, some of the patients started "waking up" and regaining some ability to move.
Although the treatment was not without flaws, the satisfaction of helping his patients and the close relationships he developed with them after caring for them for months deeply touched Sacks. In 1973, he published his narration of the events in "Awakenings", a bestseller that was later adapted into a film of the same name starring Robin Williams and Robert De Niro. Oliver Sacks went on to write about music therapy, a rare community of colourblind individuals, and his own experience both as a doctor and as a patient, among other subjects. His most notable works are probably "The Man Who Mistook His Wife for a Hat" and "An Anthropologist on Mars". Both describe fascinating case studies in detail, ranging from better-known conditions such as Parkinson's, epilepsy and schizophrenia to diagnoses relatively more obscure at the time, including Tourette's, musical hallucinations and autism. The condition that caught my attention the most when reading "The Man Who Mistook His Wife for a Hat" was the one that gives the book its title. The man who could not tell his hat apart from his spouse was diagnosed with agnosia: the inability to recognise objects, people or animals as a result of neurological damage along pathways connecting primary sensory areas. Agnosia can affect visual, auditory, tactile or facial recognition (prosopagnosia), or a combination of these. Crucially, Sacks's works showcase not only a recounting of symptoms and abnormalities, but a tale of people who retained their humanity and individuality beyond their medical diagnoses. As he told People magazine in 1986, he loved to discover potential in people who weren't thought to have any. Instead of merely fitting patients into diseases, he liked to observe how they experienced the world in their unique ways, recognising difference as a path to resilience rather than just a handicap. Written by Julia Ruiz Rua
- Nanoparticles: the future of diabetes treatment? | Scientia News
Nanoparticles: the future of diabetes treatment? Nanoparticles have unique properties Diabetes mellitus is a chronic metabolic disorder affecting millions worldwide. Given its myriad challenges, there is substantial demand for innovative therapeutic strategies in its treatment. The global diabetic population is expected to increase to 439 million by 2030, which will impose a significant burden on healthcare systems. Diabetes occurs when the body cannot produce enough insulin, a hormone crucial for regulating glucose levels in the blood. This deficiency leads to increased glucose levels, causing long-term damage to organs such as the eyes, kidneys, heart and nervous system, due to defects in insulin function and secretion. Nanoparticles have unique properties that make them versatile in their applications and promising for revolutionising the future of diabetes treatment. This article will explore the potential of this emerging technology in medicine and address the complexities and issues that arise in the management of diabetes. Nanoparticles have distinct advantages - biocompatibility, bioavailability, targeting efficiency and minimal toxicity - making them ideal for antidiabetic treatment. Drug delivery is targeted, making it precise and efficient and avoiding off-target effects. Modifying nanoparticle surfaces enhances therapeutic efficacy, enabling targeted delivery to specific tissues and cells while reducing systemic side effects. Another key benefit currently being researched is real-time glucose sensing and monitoring, which addresses a critical aspect of managing diabetes, as nanoparticle-based glucose sensors can detect glucose levels with high sensitivity and selectivity. This avoids invasive blood sampling and allows continuous monitoring of glucose levels.
These sensors can be functionalised and integrated into wearable devices or implanted sensors, making glucose monitoring convenient and reliable and helping to optimise insulin therapy. Moreover, nanoparticle-based approaches show potential in tissue regeneration, aiding the restoration of insulin production. In particular, nanomedicine is a promising tool in theranostics of chronic kidney disease (CKD), where one radioactive drug provides the diagnosis and a second delivers the therapy. The conventional procedure to assess renal fibrosis is to take a kidney biopsy, followed by histopathological assessment. This method is risky, invasive and subjective, and less than 0.01% of the kidney tissue is examined, which results in diagnostic errors and limits the accuracy of the current screening method. The standard use of pharmaceuticals has been promising but can cause hypoglycaemia, diuresis and malnutrition because of low caloric intake. Nanoparticles offer a new approach to both diagnosis and treatment and are an attractive candidate for managing CKD, as they can carry drugs, enhance image contrast, and control the rate and location of drug release. In the treatment of this multifaceted disease, nanoparticle delivery systems are a promising and innovative therapeutic strategy, given the variety of delivery methods. The range of solutions currently being developed is promising, from enhancing drug delivery, to monitoring glucose levels, to direct tissue regeneration. There is immense potential for the advancement of nanomedicines to improve patient outcomes and treatment efficacy and to alleviate the burden and side effects of the disorder. With ongoing effort and innovation, nanoparticles could greatly improve strategies for the management and future treatment of diabetes.
Written by Saanchi Agarwal Related articles: Pre-diabetes / Can diabetes mellitus become an epidemic? / Nanomedicine
- Cities designed to track the heavens: Chaco Canyon, New Mexico | Scientia News
Cities designed to track the heavens: Chaco Canyon, New Mexico Famous sites in the Chaco Canyon region include Pueblo Alto and Pueblo Bonito This is Article 1 in a series about astro-archaeology. In the desert of New Mexico are the remains of a major centre of ancestral Puebloan culture. Within the Chaco Canyon region, several places of incredible architecture and complex cultural life, called Great Houses, have been identified. It is suggested that over 150 Great Houses were constructed between the 9th and 12th centuries and connected by intricate road systems. Famous sites in the Chaco Canyon region include Pueblo Alto and Pueblo Bonito, which showcase the incredible architectural feats of the culture. Interestingly, scholars have deduced that the Great Houses were not only built to support the forming society; the details of their construction were specific for another reason: astronomy. Often, the structures were oriented in at least one of the three following ways:
The south-southeast direction: researchers suggest that the south-southeast orientation originates from a Snake Myth, which describes the use of a staff and the stars to guide migration in the southeast direction.
Aligned with the cardinal directions: a great example of this is Pueblo Alto. Built in the 11th century, its main wall is aligned within 5° of true east-west. Hungo Pavi is offset less than 5° from true north-south.
Built at horizon calendrical stations: calendrical stations are often natural structures that, when viewed from a particular location, show the sun in a memorable relation to them. For example, Figure 1 shows the sun between two prominent rock formations. Imagine this occurred only once per year: the event would mark the same day each year and thus denote an annual occasion.
Many of the ancestral Puebloan Great Houses are understood to have been built near such calendrical stations, which operate for different events like the solstices. Although the ancestral Puebloan culture may not have used physics and astronomy as we do now, astronomy was built into the fundamentals of their society and was central to their community. Written by Amber Elinsky REFERENCES & RESOURCES “History and Culture: The Center of Chacoan Culture.” Chaco Culture, National Park Service. Accessed May 2024. https://www.nps.gov/chcu/learn/historyculture/index.htm. Munro, Andrew M., and J. McKim Malville. “Ancestors and the Sun: Astronomy, Architecture and Culture at Chaco Canyon.” Proceedings of the International Astronomical Union 7, no. S278 (2011): 255–64. https://doi.org/10.1017/S1743921311012683. Images from nps.gov
- Physics in healthcare | Scientia News
Physics in healthcare
Nuclear medicine

When thinking about a career or what to study at university, many students interested in science think that they have to decide between a more academic route and something more vocational, such as medicine. While both paths are highly rewarding, it is possible to mix the two. An example of this is nuclear medicine, which allows physics students to become healthcare professionals. Nuclear medicine is an area of healthcare that involves introducing a radioactive isotope into a patient's system in order to image their body. A radioactive isotope has an unstable nucleus that decays and emits radiation. This radiation can then be detected, usually by a tool known as a gamma camera. It sounds dangerous; however, it is a fantastic tool that allows us to identify abnormalities, view organs in motion and even prevent further spreading of tumours. So, how does the patient receive the isotope? It depends on the scan they are having! The most common route is injection, but it is also possible for the patient to inhale or swallow the isotope. Some hospitals give radioactive scrambled eggs or porridge to the patient in gastric emptying imaging. The radioisotope needs to obey some conditions:

● It must have a reasonable half-life. The half-life is the time it takes for the isotope's activity to decay to half of its original value. If the half-life is too short, the scan will be useless as nothing will be seen. If it is too long, the patient will remain radioactive and spread radiation into their immediate surroundings for a long period of time.
● The isotope must be non-toxic. It cannot harm the patient!
● It must be able to biologically attach to the area of the body that is being investigated. If we want to look at bones, there is no point in giving the patient an isotope that goes straight to the stomach.
● It must have radiation of suitable energy.
The radiation must be picked up by the cameras, which are designed to be most efficient over a specific energy range; for gamma cameras, this is around 100-200 keV. Physicists are absolutely essential in nuclear medicine. They have to understand the properties of radiation, run daily quality checks to ensure the scanners are working, calibrate devices so that the correct activity of radiation is given to patients, and much more. The safety of patients and healthcare professionals must be the first priority when it comes to radiation, and with the right people on the job, safety and understanding are the priority of daily tasks. Nuclear medicine is indeed effective and is implemented in standard medicine thanks to the work of physicists. Written by Megan Martin
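The half-life condition above can be made concrete with a quick calculation. Below is a minimal Python sketch of the decay law A(t) = A0 × (1/2)^(t / T½). The 6-hour half-life and ~140 keV gamma energy of technetium-99m are standard textbook values; the 800 MBq starting activity is purely an illustrative figure, not a clinical dose recommendation.

```python
def remaining_activity(initial_activity, half_life, elapsed):
    """Activity left after `elapsed` time, given the isotope's half-life.

    Implements A(t) = A0 * (1/2)**(t / T_half); time units must match.
    """
    return initial_activity * 0.5 ** (elapsed / half_life)

# Technetium-99m, a workhorse isotope of gamma-camera imaging, has a
# half-life of about 6 hours and emits ~140 keV gamma rays -- squarely
# in the 100-200 keV range gamma cameras are optimised for.
dose_mbq = 800.0  # illustrative starting activity in MBq (assumed figure)
print(remaining_activity(dose_mbq, 6.0, 24.0))  # after 24 h (4 half-lives): 50.0
```

After four half-lives only 1/16 of the activity remains, which is why a half-life of a few hours is a convenient compromise: long enough to complete the scan, short enough that the patient is not radioactive for long.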
- NGAL: A Valuable Biomarker for Early Detection of Renal Damage | Scientia News
NGAL: A Valuable Biomarker for Early Detection of Renal Damage
How kidney damage can be detected

Nestled under the ribcage, the kidneys are primarily responsible for the filtration of toxins from the bloodstream and their elimination in urine. In instances of Acute Kidney Injury (AKI), however, this vital function is compromised. AKI is the sudden loss of kidney function, commonly seen in hospitalised patients. Because patients don't usually experience pain or distinct symptoms, AKI is difficult to identify. Early detection of AKI is paramount to prevent kidney damage from progressing into more enduring conditions such as Chronic Kidney Disease (CKD). So, how can we detect AKI promptly? This is where Neutrophil Gelatinase-Associated Lipocalin (NGAL), a promising biomarker for the early detection of renal injury, comes into focus. Until recently, assessing the risk of AKI has relied on measuring changes in serum creatinine (sCr) and urine output. Creatinine is a waste product formed by the muscles. Normally, the kidney filters creatinine and other waste products out of the blood into the urine; high serum creatinine levels therefore indicate disruption to kidney function, suggesting AKI. However, a limitation of the sCr test is that it is affected by extrarenal factors such as muscle mass: people with higher muscle mass have higher serum creatinine. Additionally, an increase in this biomarker becomes evident only once renal function is irreversibly damaged. NGAL's ability to detect kidney damage hours to days before sCr renders it a more fitting biomarker to prevent total kidney dysfunction. Among currently proposed biomarkers for AKI, the most notable is NGAL, a small protein rapidly released from the kidney tubule upon injury. It is detected in the bloodstream within hours of renal damage, and its levels rise well before the appearance of other renal markers.
Such characteristics render NGAL a promising biomarker for quickly pinpointing kidney damage. The concentration of NGAL in a patient's urine is determined using a particle-enhanced turbidimetric technique: the particles in solution are quantified by measuring the reduction in light intensity transmitted through the urine sample. In conclusion, the early detection of AKI remains a critical challenge, but NGAL emerges as a promising biomarker for promptly detecting renal injury before total loss of kidney function unfolds. NGAL offers a significant advantage over traditional biomarkers like serum creatinine: its swift induction upon kidney injury allows clinicians and healthcare providers to intervene before renal dysfunction manifests. Written by Fozia Hassan Related article: Cancer biomarkers and evolution REFERENCES BioPorto. (n.d.). NGAL. [online] Available at: https://bioporto.us/ngal/ [Accessed 5 Feb. 2024]. Medić, B., Rovčanin, B., Savić Vujović, K., Obradović, D., Duric, D. and Prostran, M. (2016). Evaluation of Novel Biomarkers of Acute Kidney Injury: The Possibilities and Limitations. Current Medicinal Chemistry, 23(19). doi: https://doi.org/10.2174/0929867323666160210130256. Buonafine, M., Martinez-Martinez, E. and Jaisser, F. (2018). More than a simple biomarker: the role of NGAL in cardiovascular and renal diseases. Clinical Science, 132(9), pp.909-923. doi: https://doi.org/10.1042/cs20171592. Giasson, J., Hua Li, G. and Chen, Y. (2011). Neutrophil Gelatinase-Associated Lipocalin (NGAL) as a New Biomarker for Non-Acute Kidney Injury (AKI) Diseases. Inflammation & Allergy - Drug Targets, 10(4), pp.272-282. doi: https://doi.org/10.2174/187152811796117753.
Haase, M., Devarajan, P., Haase-Fielitz, A., Bellomo, R., Cruz, D.N., Wagener, G., Krawczeski, C.D., Koyner, J.L., Murray, P., Zappitelli, M., Goldstein, S.L., Makris, K., Ronco, C., Martensson, J., Martling, C.-R., Venge, P., Siew, E., Ware, L.B., Ikizler, T.A. and Mertens, P.R. (2011). The Outcome of Neutrophil Gelatinase-Associated Lipocalin-Positive Subclinical Acute Kidney Injury. Journal of the American College of Cardiology, 57(17), pp.1752-1761. doi: https://doi.org/10.1016/j.jacc.2010.11.051. Moon, J.H., Yoo, K.H. and Yim, H.E. (2020). Urinary Neutrophil Gelatinase-Associated Lipocalin: A Marker of Urinary Tract Infection Among Febrile Children. Clinical and Experimental Pediatrics. doi: https://doi.org/10.3345/cep.2020.01130. Marakala, V. (2022). Neutrophil gelatinase-associated lipocalin (NGAL) in kidney injury - a systematic review. Clinica Chimica Acta, 536, pp.135-141. doi: https://doi.org/10.1016/j.cca.2022.08.029. NICE (2014). Overview | The NGAL Test for early diagnosis of acute kidney injury | Advice | NICE. [online] Available at: https://www.nice.org.uk/advice/mib3 [Accessed 6 Feb. 2024].
- A concise introduction to Markov chain models | Scientia News
A concise introduction to Markov chain models
How do they work?

Introduction

A Markov chain is a stochastic process that models a system transitioning from one state to another, where the probability of the next state depends only on the current state and not on the previous history. For example, if X0 is the current state of the system, the probability distribution of the next state, X1, depends only on X0. Formally, for a sequence of states X0, X1, X2, ...:

P(X_{n+1} = x | X_n, X_{n-1}, ..., X_0) = P(X_{n+1} = x | X_n)

It may be hard to think of real-life processes that follow this behaviour, because of the belief that all events happen in a sequence because of each other. Here are some examples:

● Games, e.g. chess - if your king sits on a certain square of a chess board, there are at most 8 squares it can move to, and which moves are available depends only on the piece's current position. The parameters of the Markov model vary depending on your position on the board, which is the essence of the Markov process.
● Genetics - the genetic code of an organism can be modelled as a Markov chain, where each nucleotide (A, C, G, or T) is a state, and the probability of the next nucleotide depends only on the current one.
● Text generation - consider the current state to be the most recent word. The transition states are all possible words that could follow on from that word. Next-word prediction algorithms can utilize a first-order Markov process to predict the next word in a sentence based on the most recent word.

The text generation example is particularly interesting because considering only the previous word when predicting the next one would lead to a very random sentence. That is where we can change things up using various mathematical techniques.
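The text-generation example above can be sketched in a few lines of Python. This is a minimal illustration, not code from any particular library: the corpus, function names and the tiny example sentence are invented for demonstration, and the `k` parameter generalises the idea to the higher-order chains discussed in the next section.

```python
import random
from collections import defaultdict

def build_model(words, k=1):
    """Map each k-word context to the list of words observed to follow it.

    Repeated followers appear multiple times, so sampling from the list
    reproduces the corpus transition frequencies.
    """
    model = defaultdict(list)
    for i in range(len(words) - k):
        context = tuple(words[i:i + k])
        model[context].append(words[i + k])
    return model

def generate(model, seed, max_words, rng=random):
    """Walk the chain: repeatedly sample a follower of the current context."""
    out = list(seed)
    for _ in range(max_words):
        followers = model.get(tuple(out[-len(seed):]))
        if not followers:  # dead end: this context never appeared in the corpus
            break
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "how to use a markov chain to generate text".split()
model = build_model(corpus, k=1)  # first-order: context is one word
print(generate(model, ("how",), 5))
```

Starting from "how", the only observed follower is "to"; from "to", the chain picks "use" or "generate" at random, exactly the word-by-word sampling described above. Passing `k=2` (and a two-word seed) turns this into the second-order chain of the next section.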
k-Order Markov Chains (adding more steps)

In a first-order Markov chain, we only consider the immediately preceding state to predict the next state. In k-order Markov chains, we broaden our perspective. Here's how it works:

Definition: a k-order Markov chain considers the previous k states (or steps) when predicting the next state. It's like looking further back in time to inform our predictions.

Example: suppose we're modelling the weather. In a first-order Markov chain, we'd only look at today's weather to predict tomorrow's. In a second-order Markov chain, we'd consider both today's and yesterday's weather. Similarly, a third-order Markov chain would involve three days of historical data.

By incorporating more context, k-order chains can capture longer-term dependencies and patterns. As k increases, the model becomes more complex, and we need more data to estimate transition probabilities accurately. See diagram below for a definition of higher-order Markov chains.

Markov chains for Natural Language Processing

A Markov chain can generate text by using a dictionary of words as the states, and the frequency of word transitions in a corpus of text as the transition probabilities. Given an input word, such as "How", the Markov chain generates the next word, such as "to", by sampling from the probability distribution of words that follow "How" in the corpus. Then it generates the next word, such as "use", by sampling from the probability distribution of words that follow "to". This process repeats until a desired length or an end of sentence is reached. That is a basic example; for more complex NLP tasks we can employ richer Markov models such as k-order, variable-order, n-gram or even hidden Markov models.

Limitations of Markov models

Markov models struggle with tasks such as text generation because they are too simplistic to create text that is intelligent, or sometimes even coherent.
Here are some reasons why:

● Fixed transition probabilities: Markov models assume that transition probabilities are constant throughout. In reality, language is dynamic and context can change rapidly; fixed probabilities may not capture these nuances effectively.
● Local dependencies: Markov chains only consider a limited context (e.g. the previous word). They don't capture long-range dependencies or global context.
● Limited context window: Markov models have a fixed context window (e.g. first-order, second-order, etc.). If the context extends beyond this window, the model won't capture it.
● Sparse data: Markov models rely on observed transition frequencies from the training corpus. If certain word combinations are rare or absent, the model struggles to estimate accurate probabilities.
● Lack of learning: Markov models don't learn from gradients or backpropagation; they're based solely on observed statistics.

Written by Temi Abbass

FURTHER READING
1. "Improving the Markov Chain Approach for Generating Text Used for…": this work focuses on text generation using Markov chains. It highlights the chance-based transition process and the representation of temporal patterns determined by probability over sample observations.
2. "Synthetic Text Generation for Sentiment Analysis": this paper discusses text generation using latent Dirichlet allocation (LDA) and a text generator based on Markov chain models. It explores approaches for generating synthetic text for sentiment analysis.
3. "A Systematic Review of Hidden Markov Models and Their Applications": this review provides insights into HMMs, a statistical model designed using a Markov process with hidden states. It discusses their applications in fields including robotics, finance, social science, and ecological time-series data analysis.