
More Than 80% of Ethereum Miners Pull the Plug After Merge – The Defiant – DeFi News

Ethereum Classic Hash Rate Plunges 48% Since Shift to Proof of Stake

Eight out of 10 Ethereum miners appear to have gone offline after The Merge, according to data from 2miners, a website tracking the hash rate of Proof of Work networks.

Data shows that many miners are opting to turn off their hardware after soaring hash rates rendered mining on many of the networks that support Ethash hardware unprofitable.

"I'm mining at a loss," TheCrowbill, an Ethereum Classic miner, told The Defiant on Sept. 27. "Probably will remain that way for some time."

Ethereum's chain merge on Sept. 15 dropped Proof of Work miners from the network in favor of Proof of Stake validators. The move reduced Ethereum's energy consumption by 99.8% and prompted miners to unplug an estimated $5B worth of mining hardware.

Ethereum Classic and the newly forked ETHW chain promised to take on a large number of Ethereum's former miners. But questions arose as to whether the networks could support a large influx of hash rate without their miners being forced to operate at a loss.

The hash rate of Ethereum Classic, which was tipped to be the top refuge for Ethereum miners, is down 47.6% since peaking near 307 terahashes per second (TH/s) on the day of The Merge, according to 2miners. Ethereum Classic's ETC token has skidded 4.7% in the last seven days compared to a 0.5% uptick for Ethereum, according to CoinGecko data.

Despite the pull-back, the network's hash rate is still up 52% compared to when The Defiant last spoke to Ethereum Classic miners and reported Ethash miners could expect negative profits for validating the chain.

With $4B in market cap, Ethereum Classic is the third-ranked Proof of Work network behind Bitcoin and Dogecoin, neither of which can support Ethash mining hardware.

Ergo was the chain that enjoyed the second-greatest influx of hashing power after The Merge behind Ethereum Classic, with its hash rate spiking 590% to 234 TH/s on merge day. But, no surprise, the chain's profitability plummeted, driving most of its newly accumulated hash rate off the network.

Ergo's hash rate is now just 25.9 TH/s, its lowest level since Sept. 13. Ergo is the 19th-ranked PoW network by market cap with $161M.

ETHW, the Proof of Work Ethereum fork launched by miners on Sept. 16, is also failing to be a refuge for Ethereum miners. Its hash rate immediately spiked to 79.4 TH/s, but then ground down to a local low below 28 TH/s on Sept. 23.

While its hash rate has since picked up to 45 TH/s, the network still appears ill-suited to support Ethereum's miners. It generates $144,500 worth of rewards daily.

While the network hosts just 5% of Ethereum's pre-Merge hash rate, its mining rewards equal only 0.7% of those Ethereum issued before The Merge, suggesting an 86% drop in revenue per unit of hash power.
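
As a quick back-of-the-envelope check of that figure (assuming a miner's revenue is simply proportional to its share of the network's hash rate, with the percentages above as the only inputs):

```python
# Rough check of the quoted ~86% revenue drop.
# Assumption: a miner's revenue share is proportional to its hash rate share,
# so per-hash revenue scales as (reward share) / (hash rate share).
ethw_hash_share = 0.05      # ETHW hosts ~5% of Ethereum's pre-Merge hash rate
ethw_reward_share = 0.007   # ...but pays out ~0.7% of Ethereum's pre-Merge rewards

relative_revenue_per_hash = ethw_reward_share / ethw_hash_share  # = 0.14
print(f"Revenue per unit of hash vs. pre-Merge Ethereum: {relative_revenue_per_hash:.0%}")
print(f"Implied revenue drop: {1 - relative_revenue_per_hash:.0%}")  # about 86%
```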

ETHW is the seventh-largest PoW network by market cap with $1.2B.

Ravencoin is the only Proof of Work network supporting Ethash hardware to retain its post-Merge hash rate, probably due to the network's rewards adjusting as its hash rate increases and decreases.

"RVN has a smooth difficulty adjustment, so it did handle the influx just fine," Tron Black, the president of the Ravencoin Foundation, told The Defiant. Black added that the network's difficulty adjusts every minute.

Ravencoin's hash rate is now 16.8 TH/s after pulling back about a quarter from its post-Merge high. Ravencoin is the 10th-ranked PoW network with a $444.4M market cap.

The rest is here:

More Than 80% of Ethereum Miners Pull the Plug After Merge - The Defiant - DeFi News


3 Process Mining Methods That Will Unlock Ideal ROI Results – ReadWrite

What do you use for your process mining methods that will unlock your ideal ROI results? Some CIOs say that the ends justify the means sometimes. By the same token, the business process management methods a CIO deploys to accomplish an objective can speak volumes about them and their business.

From cost efficiency and productivity to customer satisfaction and the avoidance of mistakes or delays, process mining (i.e., using data and automation to analyze and optimize operations) is one particular approach that is layered with benefits for CIOs and their corporate allies.

THE KEY: Similar to process intelligence (aka business intelligence), process mining helps companies make more informed decisions using evidence and data; plus, both strategies use KPIs and other data tools.

The difference between process intelligence vs. process mining resides in root-cause analysis.

While process intelligence is more about monitoring and reporting to tell you an activity went wrong, process mining tells you arguably the more important factor: the why.

And that understanding will help CIOs unlock their business processes' true potential.

The fluidity of business operations is such that statuses can change on a dime. One of the biggest perks of process mining is the real-time data it provides, allowing CIOs and other C-suite members to adapt quicker.

For enterprise legacy companies, this means modernizing internally amid digital transformation. In fact, about 80% of CFOs in a Gartner study said industries such as finance must lean more on solutions like artificial intelligence and robotic process automation to effectively support businesses by 2025.

How does process mining work? By indicating when a process started, showing how it operates, and creating a log that can assess how successful the process is. Process mining applications deliver 30-50% gains in productivity and can improve customer satisfaction by 30%.
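
To make that concrete, the sketch below uses pandas to pull start times, end-to-end cycle times, and observed activity sequences out of a small, entirely hypothetical event log; dedicated process mining tools build discovery, conformance checking, and visualization on top of exactly this kind of log.

```python
import pandas as pd

# Hypothetical event log: one row per (case, activity, timestamp).
log = pd.DataFrame({
    "case_id":  [1, 1, 1, 2, 2, 2, 2],
    "activity": ["receive", "approve", "pay",
                 "receive", "approve", "rework", "pay"],
    "timestamp": pd.to_datetime([
        "2022-09-01 09:00", "2022-09-01 12:00", "2022-09-02 10:00",
        "2022-09-01 11:00", "2022-09-02 15:00", "2022-09-03 09:00",
        "2022-09-05 16:00",
    ]),
}).sort_values(["case_id", "timestamp"])

# When each process instance started and how long it took end to end.
durations = log.groupby("case_id")["timestamp"].agg(start="min", end="max")
durations["cycle_time_hours"] = (
    (durations["end"] - durations["start"]).dt.total_seconds() / 3600
)

# How the process actually ran: the observed activity sequence (variant) per case.
variants = log.groupby("case_id")["activity"].agg(" -> ".join)

print(durations)
print(variants.value_counts())  # deviations such as "rework" show up as extra variants
```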

Legacy companies trying to catch up and embrace these strategies sometimes struggle. For example, to see the risks or bottlenecks, you need to analyze logged data, which legacy companies often don't have.

Plus, as CIOs well know, the IT built for individual departments creates silos that can spark inter-departmental friction, companywide issues, poor customer experience, and lack of employee retention.

And with change always come bouts of doubt and reluctance. Some CIOs and other leaders might not be prepared for the time and demands a digital transformation requires, especially if they're new employees who don't know the system yet. Fear of the unknown is common in cases like this, but the payoff is worth it.

Finally, many legacy companies are not prepared or built to continuously improve or to be governed and regulated. For organizations such as financial institutions, this poses a real issue when their process frameworks aren't up to date and don't comply with mandates or new regulations (such as fraud and data protection).

Beyond the potential roadblocks, CIOs need to understand the long-term value of enterprise process management. Process mining automation creates on-demand actions and results. There will be more of an influx of low-code tools to help with this, and process mining vendors will have to decide which routes they want to take with their technologies.

For businesses to succeed, they need a solid BPM platform that looks at the bigger picture and incorporates process mining technology to extract the full picture and identify the risks.

Phased approaches where you introduce new systems and technologies to teams over time will best help leaders understand what the outcomes will be, what the organization is trying to achieve, and how the implementation will hit all goals.

To help CIOs use process mining to unlock returns on technology investments, these methods must be implemented deliberately.

A common worry for a company looking to advance its technology is that failing processes will be detrimental to overall business success. These process inefficiencies are silent killers in business and can't be seen without process mining. This is an imperative strategy before implementing or adding any more technology investments.

Process mining helps your company scale. By investing in process mining technology, you can look at the needs and demands that your organization will require and evaluate future technology solutions. Moreover, you can decide if these opportunities fit in with the to-be state you determined when you mined your processes.

Process mining creates a bigger picture of the as-is state and where inefficiencies live. While it might seem like a costly endeavor, implementing cheaper software that doesn't cover the full gamut of process mining will hurt potential ROI and lead to more budget being spent to overhaul or restart an implementation.

Process mining provides an ongoing look at every element of your business operations. To see its true value, ensure you have a well-thought-out rollout process so your company's efficiency can soar.


James Gibney is the Global Automation Manager at Mavim International, a Dutch-based organization committed to helping customers manage and improve business processes. Prior to joining Mavim, James worked for various B2B technology companies modernizing marketing technology stacks, administering and managing sales and marketing databases and streamlining internal operations.

Excerpt from:

3 Process Mining Methods That Will Unlock Ideal ROI Results - - ReadWrite


Social factors and geopolitical tensions are the major cause of disruption in the mining sector, with ESG the top focus – PR Newswire

LONDON, Sept. 26, 2022 /PRNewswire/ -- ESG issues, geopolitics and climate change are the top three risks/opportunities facing mining and metals companies over the next 12 months, according to global mining leaders surveyed for the 15th edition of the EY Top 10 Business Risks and Opportunities for mining and metals in 2023.

The survey found that ESG's impact is felt across every part of the business as the issue becomes a priority for key stakeholders. Water stewardship is the top ESG risk for 76% of survey respondents as climate change and water scarcity concerns escalate.

Paul Mitchell, EY Global Mining & Metals Leader, says:

"Managing ESG risk is becoming more complex. Miners who get it right can get an edge on competitors in many ways from accessing capital, to securing license to operate, attracting talent and mitigating climate risk."

The study also highlights rising concerns around workplace culture. Bullying and harassment are endemic and tied to ongoing issues around a lack of diversity, inclusiveness and respect.

Mitchell says: "The sector needs to do more to improve health, safety and wellbeing. A balanced approach to managing both critical risks and foundational workplace safety and well-being can help companies build a holistic, robust approach."

Global conflict and geopolitical uncertainty hits the sector

According to the study, global conflict and ongoing disruption are creating new urgency for miners to rethink traditional operating and business models. Geopolitics has risen to number two on the ranking and global volatility is likely to be ongoing, driven by changing governments in key markets, competition between key economies, and a growing tide of resource nationalism.

Mitchell says: "We see evidence that governments are trying to fill revenue gaps created through the COVID-19 pandemic with new or increased mining royalties. For example, Chile plans to introduce copper royalties, and in Australia, the Queensland state government has already increased royalties on coal. For mining and metals companies, the ability to quickly assess the impact of these changes, as well as different alliances, trade flows and governments on business decisions will be critical."

Urgency for better mitigation of climate change risk

The study highlights that mining and metals companies have become progressively better at managing climate risks, but there are still opportunities to improve. Not enough miners are taking action to minimize the physical risks of climate change, which may threaten operations.

Mitchell says: "Many mining and metals companies have committed to highly ambitious decarbonization targets and a sharper focus on reporting emissions, but 2023 will reveal whether the sector is on the trajectory to net zero."

Strengthening Indigenous trust critical to license to operate (LTO) expectations

Falling from number one on the ranking, LTO came in at number four despite its increasing complexity. Miners face new LTO expectations, including building livable communities and forging trusted relationships with Indigenous communities.

Mitchell says: "It is critical for mining companies to go beyond doing what's merely required by law. It's time to commit to furthering truth and reconciliation. Ultimately, reframing LTO as a way of creating long-term value can have a positive impact on the company's brand."

Digital innovation and new business models create opportunities for differentiation

Mining and metals executives surveyed say that data mining and automation, as well as the introduction of an ESG platform to track metrics and reporting, will be the focus of digital investment over the next one to two years.

Digital and data will play an important role in helping miners meet ESG requirements, however, the study highlights that many companies are failing to make the most of the opportunity.

Mitchell says: "We still see some miners taking a siloed approach to implementing technology. An integrated, business-led approach to digital transformation can identify more opportunities to solve some of miners' biggest challenges, including ESG, climate risk, productivity and costs."

New business models can also offer opportunities for miners to reposition for a changing future, with many companies considering the benefits of strategies to rationalize, grow and transform.

Mitchell says: "Companies that scrutinize and shift business models now, can get an edge on competitors, as demand and expectations change."

For the full report, visit here.

Aparna Sankaran, EY Global Media Relations, +44 (0)207 480 245082, [emailprotected]

SOURCE EY

Here is the original post:

Social factors and geopolitical tensions are the major cause of disruption in the mining sector, with ESG the top focus - PR Newswire


Gold price rebounds as rally in dollar pauses after record high – MINING.com

Traders are also digesting a slew of economic data for signs that price increases are cooling while they wait for additional comments from Federal Reserve officials this week. US durable goods orders fell 0.2% in August, but the value of core capital goods bookings, which is a proxy for investment in equipment that excludes aircraft and military hardware, rose last month by the most since January, even amid rising interest rates.

"Capitulation risk is rising in gold," said TD Securities commodity strategists led by Bart Melek. They expect gold prices to fall further below the $1,600 level in the next stage of the interest-rate hiking cycle as traders are now pricing the potential for higher interest rates to persist for some time.

This week, the market may face fresh volatility from the release of US inflation data and public speaking engagements by Fed Vice Chair Lael Brainard and New York Fed President John Williams.

Spot gold climbed 0.8% to $1,636.02 an ounce at 11:04 a.m. in New York, after falling 1.3% on Monday. The Bloomberg Dollar Spot Index declined 0.2%, after rising to a record at the start of the week. Silver, platinum and palladium all gained.

(By Yvonne Yue Li, with assistance from Sing Yee Ong and Eddie Spence)

See the article here:

Gold price rebounds as rally in dollar pauses after record high - MINING.COM - MINING.com


Aircraft Soft Goods Market to Notice Exponential CAGR Growth of 3.60% with Size, Trends, Revenue Statistics, Demand and Key Players Forecast 2022 To…

The Aircraft Soft Goods report analyses the market situation, which may change in the coming years. This market report comprises an extensive study of different market segments and regions, emerging trends, and major drivers, challenges and opportunities in the market. It also interprets the growth outlook of the global Aircraft Soft Goods market. A data triangulation method is used throughout the report, involving data mining, analysis of the impact of data variables on the market, and primary (industry expert) validation. This widespread marketing report focuses on the top players in North America, Europe, Asia-Pacific, South America, and the Middle East & Africa. The Aircraft Soft Goods business report also identifies significant trends and factors driving or inhibiting the market growth.

The Aircraft Soft Goods Market research report provides a profound overview of product specification, technology, product type and production analysis, considering major factors such as revenue, cost, and gross margin. This marketing report is crucial in several ways for business growth and for thriving in the market. These strategies mainly include new product launches, expansions, agreements, joint ventures, partnerships, acquisitions, and others that boost their footprints in this market. An influential Aircraft Soft Goods Market report not only lends a hand for intelligent decision making but also helps better manage the marketing of goods and services, which leads to growth in the business. The research report on the Aircraft Soft Goods market unearths the competitive terrain of the industry, which is inclusive of organizations like Lantal, FELLFAB, ELeather, Tarkett, Botany Weaving, Aereos, INC., Aircraft Interior Products, Hira Technologies Pvt Ltd and Aerofloor Ltd among other domestic and global players.

Get Sample Copy of the Report to understand the structure of the complete report @ https://www.databridgemarketresearch.com/request-a-sample/?dbmr=global-aircraft-soft-goods-market

Summary of the Report

The aircraft soft goods market will reach an estimated value of 664.02 million and grow at a CAGR of 3.60% over the forecast period of 2021 to 2028. The rise in the development of fabric technology and a higher concentration of aircraft manufacturers are essential factors driving the aircraft soft goods market.

Soft goods help deliver artistic value to aircraft interiors and also help airlines improve the level of comfort for passengers, along with better-quality noise absorption and restraint of aircraft vibration. There has been a rise in significant investment in the aircraft soft goods market from airline commerce towards the maintenance and enhancement of soft goods.

In-depth qualitative analyses include identification and investigation of the following aspects:

The trend and outlook of the global market are forecast in optimistic, balanced, and conservative views. The balanced (most likely) projection is used to quantify the global aircraft soft goods market in every aspect of the classification from the perspectives of aircraft, product, material, distribution channel, and region.

Based on technology, the global market is segmented into the following sub-markets with annual revenue for 2021-2027 (historical and forecast) included in each section.

Major Industry Competitors: Global Aircraft Soft Goods Market

The major players covered in the aircraft soft goods market report are Anker Technology (UK) Ltd, Tapis Corp, Spectra Interior Products, RAMM AEROSPACE, MOHAWK CARPET, LLC, Intech Aerospace, and Hong Kong Aircraft Engineering Company Limited, among others.

Browse in-depth TOC on the Aircraft Soft Goods Market: 60 Tables, 220 Figures, 350 Pages

Market Scope, Segments and Forecast of the Aircraft Soft Goods Market

Global Aircraft Soft Goods Market, By Aircraft (Commercial, Regional, Business, Helicopters), Product (Carpets, Seat Covers, Curtains), Material (Wool/Nylon Blend Fabric, Natural Leather, Synthetic Leather, Polyester Fabric), Distribution Channel (OEM, Aftermarket), Country (U.S., Canada, Mexico, Brazil, Argentina, Rest of South America, Germany, Italy, U.K., France, Spain, Netherlands, Belgium, Switzerland, Turkey, Russia, Rest of Europe, Japan, China, India, South Korea, Australia, Singapore, Malaysia, Thailand, Indonesia, Philippines, Rest of Asia-Pacific, Saudi Arabia, U.A.E, South Africa, Egypt, Israel, Rest of Middle East and Africa) Industry Trends and Forecast to 2028

Aircraft Soft Goods Market Scope and Market Size

The aircraft soft goods market is segmented on the basis of aircraft, product, material and distribution channel. The growth among segments helps you analyse niche pockets of growth and strategies to approach the market and determine your core application areas and the difference in your target markets.

What ideas and concepts are covered in the report?

Table of Contents

Check Table of Contents of This Report @ https://www.databridgemarketresearch.com/toc/?dbmr=global-aircraft-soft-goods-market

Key Pointers of the Report

Data Sources & Methodology

The primary sources involve industry experts from the global aircraft soft goods market, including management organizations, processing organizations, and analytics service providers across the industry's value chain. All primary sources were interviewed to gather and authenticate qualitative and quantitative information and determine future prospects.

Study objectives of Aircraft Soft Goods market research:

Note: If you have any special requirements, please let us know and we will offer you the report as you want.

Do You Have Any Query Or Specific Requirement? Ask Our Industry Expert @ https://www.databridgemarketresearch.com/inquire-before-buying/?dbmr=global-aircraft-soft-goods-market

Browse Related Reports:

About Data Bridge Market Research, Private Ltd

Data Bridge Market Research Pvt Ltd is a multinational management consulting firm with offices in India and Canada. It is an innovative and neoteric market analysis and advisory company with an unmatched level of durability and advanced approaches. We are committed to uncovering the best consumer prospects and to fostering useful knowledge for your company to succeed in the market.

Data Bridge Market Research is a result of sheer wisdom and practice that was conceived and built in Pune in 2015. The company came into existence from the healthcare department with far fewer employees, intending to cover the whole market while providing best-in-class analysis. Later, the company widened its departments and expanded its reach by opening a new office in Gurugram in 2018, where a team of highly qualified personnel joined hands for the growth of the company. Even in the tough times of COVID-19, when the virus slowed down everything around the world, the dedicated team of Data Bridge Market Research worked round the clock to provide quality and support to our client base, which also speaks to the excellence up our sleeve.

Data Bridge Market Research has over 500 analysts working in different industries. We have catered to more than 40% of the Fortune 500 companies globally and have a network of more than 5,000 clients around the globe. Our coverage of industries includes

Contact Us

US: +1 888 387 2818 | UK: +44 208 089 1725 | Hong Kong: +852 8192 7475 | Email: [emailprotected]

Read more:

Aircraft Soft Goods Market to Notice Exponential CAGR Growth of 3.60% with Size, Trends, Revenue Statistics, Demand and Key Players Forecast 2022 To...


Global Radio, Watches and Other Car Tools 2022 Industry Report – ResearchAndMarkets.com – Business Wire

DUBLIN--(BUSINESS WIRE)--The "Radio, Watches And Other Tools For Car: World Trade, Markets And Competitors" report has been added to ResearchAndMarkets.com's offering.

The study provides an in-depth analysis of international trends at a product-specific level.

The study provides an historical and prospective analysis of world trade in the product of interest, with a focus on the major competitor countries and major international markets, segmented by price ranges.

The report aims to provide the user with a summary view of international trade in the chosen product/sector/industry by answering the following questions:

The analysis covers the following areas:

Market Size - The size of total international trade which provides information on the size of different markets

Medium-term Outlook - Forecasts on the possible evolution in the near future of the product's international trade

Relevant Markets - An analysis of the most relevant international markets, segmented by price bands. The focus provides basic information to understand which markets tend to pay a higher price, showing preference for quality

Relevant Competitors - Review of the major competitor countries that play a relevant role in the international supply of the product. The focus provides basic information to understand the competitive strategies implemented by main competitors and evaluate how successful they are

The annual information is the result of the following data mining techniques:

The forecasts are developed from annual historical data and from the latest publication of World Economic Outlook, published at least twice a year by the International Monetary Fund.

The international trade forecasts are the result of econometric models aimed at providing an estimate of the scenario of foreign trade flows and highlighting future threats and opportunities of the industry at an international level.

Key Topics Covered:

1. Product Description

2. World Trade Analysis

3. Markets Analysis:

4. Competitors Analysis:

5. Appendix

For more information about this report visit https://www.researchandmarkets.com/r/v9fdby

More:

Global Radio, Watches and Other Car Tools 2022 Industry Report - ResearchAndMarkets.com - Business Wire


Do universities offer a way out of the data skills shortage? – FE News

A 2022 report by Ernst & Young suggests that, while data centricity is a top priority for businesses, organisations are still having trouble filling such roles due to ineffective upskilling programmes and a shortage of talent. Here John Salt, co-founder of the UK's largest dedicated data jobs platform, OnlyDataJobs, explores the real issues behind the skills shortage.

Education institutes hold the key to reducing the skills shortage in data science: the UK's National Data Strategy outlines the need for the education system to better prepare those leaving school, further education and university for increasingly data-rich lives and careers.

The strategy suggests that foundational data literacy will be required by all in the future and states that universities will take part in a pilot, on a voluntary basis, which involves testing the most effective way to teach these skills to undergraduates either by offering modules including subjects such as artificial intelligence (AI), cyber and digital skills, or by integrating data skills into other subject areas.

The UK Government also launched new AI and data science conversion courses, which are backed by £24 million of funding from government, universities and industry partners. In 2020, there were 2,500 places available, followed by an additional 2,000 in 2021.

There are currently over 100 universities in the UK offering data analytics courses, including the University of Manchester, which offers an MSc in data science, the University of Strathclyde, which has undergraduate and postgraduate courses, and London Metropolitan University, which introduces students to topics such as statistical modelling, data mining and data visualisation in its data analytics MSc degree. However, even institutes with relevant courses face challenges.

A major issue in higher education is recruiting industry-experienced teaching staff with a good understanding of relevant data skills, especially given the pay gap between the corporate and education sectors. In recent years, organisations such as The Bright Initiative have partnered with UK universities, like King's College London and the University of Oxford, to improve access to leading data technology and teaching, which goes some way in improving this.

Education institutes can also help address the skills gap by encouraging minorities to join relevant courses. Statistics suggest that females make up only 19 per cent of the UK's tech workforce, and people from black, Asian and minority ethnic (BAME) backgrounds represent just 4 per cent. This is a hugely untapped market, but with scholarships now available through various government initiatives to specifically support applications from diverse backgrounds, we can hope that this percentage increases.

Educational charities can also help with this, in particular by ensuring that undergraduates from less privileged backgrounds can afford to study, and then go on to sustain the best graduate jobs in data and analytics.

Finally, with the world of data science changing every day, new graduates might struggle to find a job role best suited to them. They will likely have trained in a particular programming language or in using a certain technology, and traditional jobs boards might not have the specific search functions required to find the roles best suited to their skill set.

In comparison, OnlyDataJobs allows job seekers to search for their next role by technology or programming language, such as Python, SQL, Azure or Java. In fact, there are over 70 different technologies or programmes that the user can filter by. It also takes into account the growing requirement for remote roles, and other skills of the candidate, such as additional languages spoken.

Follow this link:

Do universities offer a way out of the data skills shortage? FE News - FE News


Predicting Diabetes in Patients with Metabolic Syndrome | DMSO – Dove Medical Press

Introduction

Diabetes has become a major public health burden in China in the 21st century. The prevalence of diabetes in China had increased to 12.8% in 2017.1 Reportedly, China had the highest number of adults with diabetes (140.9 million) in 2021; this number has been projected to increase to 174 million by 2045.2 Since most patients have type 2 diabetes, which is preventable by early interventions, efficient identification of controllable risk factors is crucial to implement prevention and intervention strategies.

Metabolic syndrome (MetS) is defined as a cluster of risk factors for type 2 diabetes and atherosclerotic cardiovascular disease. MetS has become increasingly prevalent worldwide.3,4 Asians are generally considered to have a lower prevalence of MetS, reported to be 24% in China versus 33% in the USA.5,6 However, the MetS prevalence in China doubled from 2002 to 2012,7 as economic development has changed lifestyles in both urban and rural areas and resulted in more people being overweight.8 The rapidly increasing prevalence of MetS is leading to more cases of diabetes and higher medical costs. Lifestyle intervention was proven to be efficient for individuals with MetS to prevent the onset of diabetes,9,10 while unregulated MetS was the strongest risk factor for new-onset diabetes.11 More aggressive intervention should be carried out in the MetS population.

Traditional risk models have been developed to identify people at high risk and have shown a potential for detecting the onset of diabetes.12 Recently, the successful implementation of information technologies has enhanced the efficiency of the healthcare system. Machine-learning models have been used in the prediction of many common diseases.13 Numerous studies have utilized machine-learning techniques to predict the onset of diabetes and improve diagnostic accuracy.14-18 Machine-learning techniques have become a vital instrument in diabetes management for healthcare providers.

In previous studies that used the above-mentioned machine-learning methods, only single-time-point data were used for the models, either for simultaneous diagnosis or for prediction of incident diabetes during follow-up. Only a few studies have used multiple years' data or trends of variables to predict diabetes.19,20 To our knowledge, the history of lifestyle changes or different health trajectories may contribute to the risk of future diabetes. By using machine-learning methods with multiple years' data, we could construct a more accurate model by taking trajectories into account for a more personalized assessment.

This study focused on individuals with MetS who were at relatively high risk of developing diabetes. By using multiple years' data from the annual health examination database, machine-learning models for diabetes prediction were constructed and the prediction performance was compared between multiple-year and single-year models.

This study was conducted in the Health Management Center of Peking Union Medical College Hospital. All physical examination data from subjects were retrospectively gathered from 2008 to 2020 and securely stored in the Peking Union Medical College Hospital Health Management database (PUMCH-HM). The database comprised all participants' annual examination records, including demographic information, vital signs, laboratory tests, and medical history. The target population in this study was patients with MetS, defined based on the International Diabetes Federation (IDF) criteria (Table 1).21 Diabetes was diagnosed based on one or more of the following criteria from the American Diabetes Association (ADA):22 fasting plasma glucose (FPG) ≥7.0 mmol/L, glycated hemoglobin (HbA1c) ≥6.5%, or a self-reported diabetes diagnosis per a healthcare professional's diagnosis. The inclusion criteria were: (1) no diabetes was detected when subjects were diagnosed with MetS in the first year, and (2) the participant had at least 6 years' records in the dataset since the first year of MetS diagnosis. A total of 4510 participants (follow-up years: 71.4 years) were extracted from the database and 332 patients developed incident diabetes during the follow-up period. The dataset comprised 15 variables from three sections: demographic information including age, sex, height, weight, body mass index (BMI), and waist circumference (WC); vital signs including systolic blood pressure (SBP) and diastolic blood pressure (DBP); and laboratory tests including FPG, HbA1c, high-density lipoprotein cholesterol (HDL-C), low-density lipoprotein cholesterol (LDL-C), triglyceride (TG), thyroid-stimulating hormone (TSH), and uric acid (UA). The study was conducted in accordance with the Declaration of Helsinki and was approved by the Peking Union Medical College Hospital Ethics Committee. Informed consent was obtained from all patients included in the study.

Table 1 The Criteria of the International Diabetes Federation (IDF) for the Definition of Metabolic Syndrome (MetS)

The missing percentages of each variable are presented in Table 2. Three variables, WC, HbA1c, and TSH, had more than 30% missing data because they were not collected during the annual health examination until 2014. HDL-C, LDL-C, and UA had missing rates of 1% to 10%, mainly because some participants declined those tests. The other variables were missing at random due to human error, and their missing percentages were below 1%.

Table 2 The Missing Percentages of Each Variable

To begin with, it is crucial to impute the missing data, which are often present in medical records. Here, a random forest-based iterative imputation method was applied to the dataset.23 The procedure started by imputing missing values in the targeted column with the smallest number of missing values. The other, non-targeted columns with missing values were initially imputed with the column mean for numerical variables and the column mode for categorical variables. Then, a random forest model was fitted within the imputer, with the targeted column set as the outcome variable and the remaining columns set as predictors, over the complete rows of the targeted column. Subsequently, the missing rows of the targeted column were predicted using the corresponding rows of the non-targeted columns as input to the fitted random forest model. After that, the imputer proceeded to the next targeted column, the one with the second smallest number of missing values in the dataset. The process repeated for each column with missing values over multiple iterations until it met the stopping criterion, which was governed by the difference between the imputed arrays over consecutive iterations.
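
A minimal sketch of this kind of MissForest-style imputation, using scikit-learn's IterativeImputer with a random forest estimator; the column names and values are illustrative, and this approximation initializes every column with the mean rather than using the mode for categorical variables as described above.

```python
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401 (activates IterativeImputer)
from sklearn.impute import IterativeImputer
from sklearn.ensemble import RandomForestRegressor

# Illustrative subset of examination variables with missing values.
df = pd.DataFrame({
    "FPG":   [5.1, np.nan, 6.2, 5.8, np.nan],
    "HbA1c": [5.4, 5.9, np.nan, 6.1, 5.6],
    "BMI":   [24.1, 27.3, 29.8, np.nan, 26.0],
    "UA":    [310.0, 402.0, np.nan, 365.0, 388.0],
})

# Iteratively model each column with missing values on the remaining columns
# using a random forest, starting with the column with the fewest missing values,
# until the imputed values stop changing much between iterations.
imputer = IterativeImputer(
    estimator=RandomForestRegressor(n_estimators=100, random_state=0),
    max_iter=10,
    imputation_order="ascending",  # fewest missing values first
    random_state=0,
)
imputed = pd.DataFrame(imputer.fit_transform(df), columns=df.columns)
print(imputed.round(2))
```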

After the imputation of missing data, outliers were identified using the interquartile range (IQR) method, where Q1 represents the 25th percentile, Q3 represents the 75th percentile, and IQR is the difference between Q3 and Q1. Outliers were defined as values falling outside the range between (Q1 - 1.5*IQR) and (Q3 + 1.5*IQR). The data were then also manually examined against benchmarks specified by healthcare professionals. Skipping outlier removal before the next step, ie, normalization, would have produced a large bias. As each variable has entirely different units and scales, directly inputting these variables into the model would lead to biased prediction results dominated by the variable with the largest variance. Therefore, a simple z-score normalization (standard scaling) was applied to all features, which removes the mean and scales to unit variance. To reflect the yearly fluctuation of each variable during the follow-up period, additional features named delta_xx were computed for each variable by applying a first-order difference over the longitudinal data. Moreover, categorical variables like sex were encoded as 0 for female and 1 for male.
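
The preprocessing above might look roughly like the following sketch; the long-format table, column names, and values are illustrative, and the manual clinical review step is omitted.

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler

def iqr_outlier_mask(s: pd.Series) -> pd.Series:
    """Flag values outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR]."""
    q1, q3 = s.quantile(0.25), s.quantile(0.75)
    iqr = q3 - q1
    return (s < q1 - 1.5 * iqr) | (s > q3 + 1.5 * iqr)

# One row per participant per examination year (illustrative data).
records = pd.DataFrame({
    "patient_id": [1, 1, 1, 2, 2, 2],
    "year":       [1, 2, 3, 1, 2, 3],
    "FPG":        [5.2, 5.5, 5.9, 6.0, 6.3, 6.4],
    "weight":     [82.0, 84.0, 87.0, 95.0, 93.0, 92.0],
    "sex":        ["F", "F", "F", "M", "M", "M"],
})
numeric_cols = ["FPG", "weight"]

# 1) Drop rows containing IQR outliers.
outlier_rows = records[numeric_cols].apply(iqr_outlier_mask).any(axis=1)
clean = records.loc[~outlier_rows].copy()

# 2) Z-score normalization so no variable dominates through its scale.
clean[numeric_cols] = StandardScaler().fit_transform(clean[numeric_cols])

# 3) delta_xx features: first-order (year-over-year) differences within each patient.
for col in numeric_cols:
    clean[f"delta_{col}"] = clean.groupby("patient_id")[col].diff()

# 4) Encode sex as 0 (female) / 1 (male).
clean["sex"] = clean["sex"].map({"F": 0, "M": 1})
print(clean)
```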

A patient was labeled as 1 (positive) if they were diagnosed with diabetes in the last record; otherwise, the patient was labeled as 0 (negative). Except for sex and height, which remain constant for each participant, the predictor variables were derived from statistical values of the other 13 numerical variables. The statistics computed were the average, sum, variance, minimum, and maximum value over the year 1 to n data, where n was defined as the number of health records counted from the year of the MetS diagnosis.
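
A hypothetical sketch of this labeling and feature-aggregation step; the helper build_features and the toy records table below are illustrative, not taken from the study.

```python
import pandas as pd

def build_features(records, numeric_cols, n_years):
    """One row per patient: summary statistics of years 1..n_years plus the final-record label."""
    window = records[records["year"] <= n_years]
    stats = (window.groupby("patient_id")[numeric_cols]
                   .agg(["mean", "sum", "var", "min", "max"]))
    stats.columns = [f"{col}_{stat}" for col, stat in stats.columns]  # flatten MultiIndex

    # Outcome label taken from each patient's last available record.
    last = records.sort_values("year").groupby("patient_id").tail(1)
    labels = last.set_index("patient_id")["diabetes"].rename("label")
    return stats.join(labels)

# Illustrative long-format data: one row per patient per examination year.
records = pd.DataFrame({
    "patient_id": [1, 1, 1, 1, 2, 2, 2, 2],
    "year":       [1, 2, 3, 4, 1, 2, 3, 4],
    "FPG":        [5.2, 5.5, 5.9, 5.8, 6.0, 6.3, 6.6, 7.4],
    "HbA1c":      [5.5, 5.6, 5.7, 5.6, 6.0, 6.1, 6.3, 6.7],
    "diabetes":   [0, 0, 0, 0, 0, 0, 0, 1],
})

# A "year 1-3" style model would train on summary features of the first three years.
print(build_features(records, ["FPG", "HbA1c"], n_years=3))
```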

Three popular classification algorithms were evaluated on the dataset: logistic regression, random forest, and Xgboost. Using Python 3.8, all the classifiers were computed with a fixed random state value to ensure consistent results. For logistic regression, the parameter C defining the relative strength of regularization was set to 1 and the regularization approach was L2. For the random forest and Xgboost algorithms, the maximum depth of all trees was set to 6 and the number of trees was set to 50. As the dataset was significantly biased towards the negative subjects, random down-sampling was applied to the majority class to balance the whole dataset. The balanced dataset was then randomly divided into training (80%) and testing (20%) data. Finally, the least absolute shrinkage and selection operator (LASSO) method was applied to rank feature importance; the constant alpha that multiplies the L1 term was set to 1 in the LASSO model.
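
Putting the stated settings together, the model-fitting and LASSO-ranking step might look like the sketch below; synthetic data stands in for the PUMCH-HM features, and the exact random seed value is arbitrary.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, Lasso
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

SEED = 42  # a fixed random state for consistent results (the exact value is illustrative)

# Synthetic stand-in for the engineered feature matrix and 0/1 diabetes labels.
rng = np.random.default_rng(SEED)
X = rng.normal(size=(400, 10))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=400) > 1).astype(int)

# Random down-sampling of the majority (non-diabetes) class to balance the dataset.
pos, neg = np.flatnonzero(y == 1), np.flatnonzero(y == 0)
keep = np.concatenate([pos, rng.choice(neg, size=len(pos), replace=False)])
X_bal, y_bal = X[keep], y[keep]

X_train, X_test, y_train, y_test = train_test_split(
    X_bal, y_bal, test_size=0.2, random_state=SEED, stratify=y_bal)

models = {
    "logistic_regression": LogisticRegression(C=1.0, penalty="l2", max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=50, max_depth=6, random_state=SEED),
    "xgboost": XGBClassifier(n_estimators=50, max_depth=6, random_state=SEED),
}
for name, model in models.items():
    model.fit(X_train, y_train)

# LASSO (alpha=1) on the training features to rank importance by |coefficient|;
# with alpha this strong, many coefficients shrink exactly to zero.
lasso = Lasso(alpha=1.0).fit(X_train, y_train)
ranking = np.argsort(-np.abs(lasso.coef_))
print("Feature columns ranked by |LASSO coefficient|:", ranking)
```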

The model was developed to predict the probability of diabetes onset using the health data of the first 3 years. As shown in Figure 1, by using different sets of health data, we developed five models including the single-year models (year 1, year 2, and year 3) and the multiple-year models (year 1-2 and year 1-3). All the classification models were individually assessed using the area under the receiver operating characteristic curve (AUROC), recall (also known as sensitivity), and precision. These assessment variables were computed from the confusion matrix, a commonly used measure when solving classification problems. Four basic concepts that originate from the confusion matrix are true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN). Precision is defined as the ratio TP/(TP+FP), measuring the model's ability to accurately predict patients developing diabetes, while recall is defined as the ratio TP/(TP+FN), evaluating the model's ability to correctly label diabetes onset among patients who indeed develop diabetes. The F1 score is the harmonic mean of precision and recall, which gives a better measure of the incorrectly classified cases than the accuracy metric. A five-fold stratified cross-validation method was applied to all the classifiers for internal validation, which can avoid overfitting during the training process.
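
A minimal sketch of this evaluation scheme on the same kind of synthetic data, using scikit-learn's stratified five-fold cross-validation and the metrics defined above:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_validate

# Synthetic stand-in for the balanced feature matrix and labels.
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 8))
y = (X[:, 0] - X[:, 2] + rng.normal(scale=0.7, size=300) > 0).astype(int)

# Precision = TP/(TP+FP), recall = TP/(TP+FN), F1 = their harmonic mean, and AUROC
# summarizes ranking quality across thresholds; stratified folds preserve the class ratio.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
scores = cross_validate(
    RandomForestClassifier(n_estimators=50, max_depth=6, random_state=42),
    X, y, cv=cv,
    scoring=["roc_auc", "recall", "precision", "f1"],
)

for metric in ["roc_auc", "recall", "precision", "f1"]:
    vals = scores[f"test_{metric}"]
    print(f"{metric}: {vals.mean():.3f} +/- {vals.std():.3f}")
```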

Figure 1 The definition of each longitudinal dataset in the timeline.

The numerical variables at baseline were presented as mean ± standard deviation (SD) in the summary in Table 2. t-tests were performed for each variable between sub-groups, where p<0.05 was counted as a statistically significant difference. All statistical analyses were performed using Python 3.8. The classification models come from two well-established Python packages: scikit-learn and Xgboost.

A total of 4510 patients with MetS were included in the analysis. According to the IDF criteria, the abnormal rates at baseline were WC=48.8%, TG=43.5%, HDL-C=39.4%, SBP/DBP=40.7%, and FPG=28%. In all, 332 patients had developed diabetes by the end of the follow-up. All the variables exhibited significant differences between the two sub-groups (Table 3). It is evident that patients with diabetes presented higher FPG (6.43±1.11 mmol/L) and HbA1c (6.00±0.65%) than those without diabetes (FPG: 5.37±0.45 mmol/L; HbA1c: 5.47±0.30%).

Table 3 Baseline Characteristics of Sub-Groups from Patient Cohorts

The performance results for the three single-year models and the two multiple-year models are presented in Table 4 and Figure 2. Both the random forest and Xgboost models over multiple-year data achieved relatively high performance (mean AUROC >0.85 for both models), while the results from single-year data were slightly worse. Among the models applied to the multiple-year data, the best-performing model was Xgboost. The classification results for single-year data led to a different conclusion, wherein the random forest model achieved the best performance (mean AUROC: 0.835±0.029, mean recall: 0.753±0.001, mean precision: 0.756±0.020, mean F1-score: 0.751±0.014). The combination of the Xgboost model and the year 1-3 dataset showed the best performance overall (AUROC: 0.897, recall: 0.831, precision: 0.837, F1-score: 0.834). For both the random forest and Xgboost single-year models, AUROC increased from year 1 to year 2 to year 3, indicating that the most recent data provided the best predictive power.

Table 4 Performance Metrics of Machine-Learning Models Using Longitudinal Data

Figure 2 ROC curves of the three models for all the datasets.

Abbreviation: ROC, receiver operating characteristic.

Notes: (A) the ROC curve of logistic regression for single-year models and multiple-year models; (B) the ROC curve of random forest for single-year models and multiple-year models; (C) the ROC curve of Xgboost for single-year models and multiple-year models.

Among all the datasets, the year 3 dataset presented the best average prediction results across all three models (mean AUROC: 0.841±0.018, mean recall: 0.760±0.031, mean precision: 0.784±0.013, mean F1-score: 0.768±0.021). The lowest recall (0.608) and precision (0.549) rates were both from the logistic regression model on the year 1-2 dataset. In general, model performance improved as the longitudinal dataset grew, as the AUROC improved with the addition of more data for both the random forest (year 1-3: AUROC=0.893; year 3: AUROC=0.862; year 1-2: AUROC=0.847; year 2: AUROC=0.838) and Xgboost (year 1-3: AUROC=0.897; year 3: AUROC=0.833; year 1-2: AUROC=0.856; year 2: AUROC=0.823) models. The other evaluation parameters, including recall, precision, and F1-score, also demonstrated an obvious enhancement with the accumulation of longitudinal data.

The feature importance of each dataset using LASSO is shown in Figure 3. Regardless of the dataset used, the top two features that most influenced the prediction results were FPG and HbA1c or their related statistical features, which makes sense as they were used to define diabetes. In the multiple-year models, the highest FPG had the most important predictive value for the onset of diabetes, followed by HbA1c and BMI. For both multi-year datasets, some features reflecting the fluctuations of yearly change appear among the top 15 features. For the year 1-2 dataset, the delta of UA ranked sixth, providing another useful feature for diabetes prediction. The delta of weight ranked fourth in the year 1-3 dataset.

Figure 3 Feature importance of each dataset using LASSO.

Abbreviations: BMI, body mass index; WC, waist circumference; SBP, systolic blood pressure; DBP, diastolic blood pressure; FPG, fasting plasma glucose; HDL-C, high-density lipoprotein cholesterol; LDL-C, low-density lipoprotein cholesterol; TG, triglyceride; HbA1c, glycated hemoglobin; TSH, thyroid-stimulating hormone; UA, uric acid.

Notes: Parameter Del_xx was abbreviated from delta_xx. Var_xx was abbreviated from Variance_xx.

To our knowledge, this is the first study to use multiple years' data to predict the risk of diabetes for patients with MetS. The average AUROC for both the random forest and Xgboost models reached >0.80, indicating sufficient performance of both classifiers. This study demonstrated overall improved performance metrics with the accumulation of longitudinal data.

Of all the longitudinal datasets used, the Xgboost model performed best with the highest AUROC. It also presented very similar recall and precision rates, so it can be considered a well-balanced model. Among the three models, Xgboost was the classifier most sensitive to the longitudinal dataset, as its AUROC increased with the addition of more years' data and it presented the largest variance in AUROC across the 3 years. The random forest model, which presented the least variance across the different longitudinal datasets, achieved the most stable classification results (AUROC=0.850±0.033) across the different groups of datasets. Unlike the tree-based models, the logistic regression model worsened when more longitudinal data were added, as the multi-year dataset feeds more features into the model, which may cause overfitting. The logistic regression model may therefore not be a good classifier for longitudinal dataset prediction. The gradual increase in performance metrics across the single-year models suggests that the closer the data are to the outcome year, the more accurate the model can be.

The average performance metrics from multiple years' data using random forest and Xgboost were better than those of each single year; this result clearly shows the considerable benefit of using longitudinal data when predicting the onset of diabetes. Moreover, our results indicated that for individuals with similar clinical parameters, the variation trends of these parameters could change the risk of future diabetes. Models based on longitudinal multiple years' data may provide more personalized assessment tools for risk evaluation. Our prediction models exhibited better results than some other longitudinal studies. For instance, Lai et al demonstrated that the Gradient Boost Model (GBM) was best with an AUROC of 0.847 for diabetes prediction.24 In a recently published 13-year longitudinal study, the cumulative exposure of 3 years before baseline was used to predict diabetes by Cox regression and the AUC was 0.802.19

In both multiple-year models, we found that the highest FPG was the strongest predictor of diabetes, followed by the mean or lowest level of HbA1c and then BMI. Decreased thyroid function (as indicated by TSH) was also a risk factor in each single-year or multiple-year model except for the year 2 model. This result is consistent with current evidence that suggests an increased type 2 diabetes risk in people with hypothyroidism.25,26 When focusing on deltas that represent the trends of variables, we found that delta weight, delta TSH, delta UA, and delta TG were stronger predictors than delta FPG or HbA1c. Especially in the year 1-3 model, delta weight was the fourth-most important feature, suggesting that a history of gaining weight is the main risk factor for MetS patients to develop diabetes. The importance of weight loss for diabetes prevention has been proven in several prospective large-scale clinical trials such as the Diabetes Prevention Program (DPP), Finnish Diabetes Study, and the Da Qing Study.27-29 Our study provided a new perspective by including the history of weight loss or weight gain in the individualized risk assessment of diabetes.

Our study is limited by its retrospective design and the sample size, as we focused on MetS patients having multiple years' health records. Furthermore, our results need to be cautiously extrapolated to the general Chinese population, given that it is a single-center study, and the participants were mostly company employees with a relatively high socioeconomic status from North China. It is essential to validate our proposed model using an external dataset in the future. A good model with sufficient robustness can achieve similar results with various datasets.

To our knowledge, this is the first study to use machine-learning methods based on multiple years' data to predict diabetes in MetS patients. This study demonstrated improved performance with the accumulation of longitudinal data. In the multiple-year models, fluctuation of weight and some biomarkers played certain roles. This showed that models based on longitudinal multiple years' data may provide more personalized assessment tools for risk evaluation in MetS patients.

We acknowledge all the healthcare workers involved in the establishment of the PUMCH-HM database in Peking Union Medical College Hospital.

The authors report no conflicts of interest in this work.

1. Li Y, Teng D, Shi X, et al. Prevalence of diabetes recorded in mainland China using 2018 diagnostic criteria from the American Diabetes Association: national cross-sectional study. BMJ. 2020;369. doi:10.1136/BMJ.M997

2. International Diabetes Federation. IDF Diabetes Atlas [Internet]. 10th ed. International Diabetes Federation; 2021. Available from: http://www.diabetesatlas.org. Accessed April 18, 2022.

3. Aguilar M, Bhuket T, Torres S, Liu B, Wong RJ. Prevalence of the metabolic syndrome in the United States, 2003-2012. JAMA. 2015;313(19):1973. doi:10.1001/jama.2015.4260

4. Ford ES, Giles WH, Dietz WH. Prevalence of the metabolic syndrome among US adults. JAMA. 2002;287(3):356. doi:10.1001/jama.287.3.356

5. Hirode G, Wong RJ. Trends in the prevalence of metabolic syndrome in the United States, 2011-2016. JAMA. 2020;323(24):2526. doi:10.1001/jama.2020.4501

6. Li R, Li W, Lun Z, et al. Prevalence of metabolic syndrome in mainland China: a meta-analysis of published studies. BMC Public Health. 2016;16(1):296. doi:10.1186/s12889-016-2870-y

7. He Y, Li Y, Bai G, et al. Prevalence of metabolic syndrome and individual metabolic abnormalities in China, 2002-2012. Asia Pac J Clin Nutr. 2019;28(3):621-633. doi:10.6133/apjcn.201909_28(3).0023

8. Wu Y. Overweight and obesity in China. BMJ. 2006;333(7564):362-363. doi:10.1136/bmj.333.7564.362

9. Lee MK, Han K, Kim MK, et al. Changes in metabolic syndrome and its components and the risk of type 2 diabetes: a nationwide cohort study. Sci Rep. 2020;10(1):2313. doi:10.1038/s41598-020-59203-z

10. Kim D, Yoon SJ, Lim DS, et al. The preventive effects of lifestyle intervention on the occurrence of diabetes mellitus and acute myocardial infarction in metabolic syndrome. Public Health. 2016;139:178-182. doi:10.1016/J.PUHE.2016.06.012

11. Ohnishi H, Saitoh S, Akasaka H, Furukawa T, Mori M, Miura T. Impact of longitudinal status change in metabolic syndrome defined by two different criteria on new onset of type 2 diabetes in a general Japanese population: the Tanno-Sobetsu Study. Diabetol Metab Syndr. 2016;8(1). doi:10.1186/S13098-016-0182-0

12. Abbasi A, Peelen LM, Corpeleijn E, et al. Prediction models for risk of developing type 2 diabetes: systematic literature search and independent external validation study. BMJ. 2012;345(sep182):e5900. doi:10.1136/bmj.e5900

13. Dinh A, Miertschin S, Young A, Mohanty SD. A data-driven approach to predicting diabetes and cardiovascular disease with machine learning. BMC Med Inform Decis Mak. 2019;19(1). doi:10.1186/s12911-019-0918-5

14. Pei D, Gong Y, Kang H, Zhang C, Guo Q. Accurate and rapid screening model for potential diabetes mellitus. BMC Med Inform Decis Mak. 2019;19(1):41. doi:10.1186/s12911-019-0790-3

15. Kavakiotis I, Tsave O, Salifoglou A, Maglaveras N, Vlahavas I, Chouvarda I. Machine learning and data mining methods in diabetes research. Comput Struct Biotechnol J. 2017;15:104-116. doi:10.1016/j.csbj.2016.12.005

16. Talaei-Khoei A, Wilson JM. Identifying people at risk of developing type 2 diabetes: a comparison of predictive analytics techniques and predictor variables. Int J Med Inform. 2018;119:22-38. doi:10.1016/j.ijmedinf.2018.08.008

17. Upadhyaya SG, Murphree DH, Ngufor CG, et al. Automated diabetes case identification using electronic health record data at a tertiary care facility. Mayo Clin Proc. 2017;1(1):100-110. doi:10.1016/j.mayocpiqo.2017.04.005

18. Alghamdi M, Al-Mallah M, Keteyian S, Brawner C, Ehrman J, Sakr S. Predicting diabetes mellitus using SMOTE and ensemble machine learning approach: the Henry Ford ExercIse Testing (FIT) project. PLoS One. 2017;12(7):e0179805. doi:10.1371/journal.pone.0179805

19. Simon GJ, Peterson KA, Castro MR, Steinbach MS, Kumar V, Caraballo PJ. Predicting diabetes clinical outcomes using longitudinal risk factor trajectories. BMC Med Inform Decis Mak. 2020;20(1):6. doi:10.1186/s12911-019-1009-3

20. Oh W, Kim E, Castro MR, et al. Type 2 diabetes mellitus trajectories and associated risks. Big Data. 2016;4(1):25-30. doi:10.1089/big.2015.0029

21. Alberti KGMM, Eckel RH, Grundy SM, et al. Harmonizing the metabolic syndrome: a joint interim statement of the International Diabetes Federation Task Force on Epidemiology and Prevention; National Heart, Lung, and Blood Institute; American Heart Association; World Heart Federation; International Atherosclerosis Society; and International Association for the Study of Obesity. Circulation. 2009;120(16):1640-1645. doi:10.1161/CIRCULATIONAHA.109.192644

22. American Diabetes Association. 2. Classification and diagnosis of diabetes: standards of medical care in diabetes-2022. Diabetes Care. 2022;45(Supplement_1):S17-S38. doi:10.2337/dc22-S002

23. Stekhoven DJ, Bühlmann P. MissForest--non-parametric missing value imputation for mixed-type data. Bioinformatics. 2012;28(1):112-118. doi:10.1093/BIOINFORMATICS/BTR597

24. Lai H, Huang H, Keshavjee K, Guergachi A, Gao X. Predictive models for diabetes mellitus using machine learning techniques. BMC Endocr Disord. 2019;19(1). doi:10.1186/s12902-019-0436-6

25. Roa Dueñas OH, van der Burgh AC, Ittermann T, et al. Thyroid function and the risk of prediabetes and type 2 diabetes. J Clin Endocrinol Metab. 2022;107(6). doi:10.1210/CLINEM/DGAC006

26. Rong F, Dai H, Wu Y, et al. Association between thyroid dysfunction and type 2 diabetes: a meta-analysis of prospective observational studies. BMC Med. 2021;19(1). doi:10.1186/S12916-021-02121-2

27. Diabetes Prevention Program Research Group. 10-year follow-up of diabetes incidence and weight loss in the Diabetes Prevention Program Outcomes Study. Lancet. 2009;374(9702):1677-1686. doi:10.1016/S0140-6736(09)61457-4

28. Lindström J, Ilanne-Parikka P, Peltonen M, et al. Sustained reduction in the incidence of type 2 diabetes by lifestyle intervention: follow-up of the Finnish Diabetes Prevention Study. Lancet. 2006;368(9548):1673-1679. doi:10.1016/S0140-6736(06)69701-8

29. Li G, Zhang P, Wang J, et al. Cardiovascular mortality, all-cause mortality, and diabetes incidence after lifestyle intervention for people with impaired glucose tolerance in the Da Qing Diabetes Prevention Study: a 23-year follow-up study. Lancet Diabetes Endocrinol. 2014;2(6):474-480. doi:10.1016/S2213-8587(14)70057-9

See the original post:

Predicting Diabetes in Patients with Metabolic Syndrome | DMSO - Dove Medical Press


Humanity's fight against Covid: The promise of artificial intelligence – Times of India

Few know that the coronavirus and its allied disease, Covid-19, were first discovered by a data-mining program. HealthMap, a website run by Boston Children's Hospital, raised an alarm about multiple cases of pneumonia in Wuhan, China, rating its urgency at three on a scale of five. Soon after this discovery, the pandemic hit the world like a tsunami. As it progressed, governments struggled to deal with the unprecedented crisis on multiple fronts and were forced to look at innovative ways to augment their efforts, presenting an opportunity to leverage Artificial Intelligence (AI).

AI was used in varied settings, including drug discovery, testing, prevention and overcoming resource constraints, and its success opened a whole new door of possibilities. Here's a look at some of the most intuitive, innovative and advantageous uses of the technology during COVID-19, outlined under the four categories of diagnosis and prognosis, prediction and tracking, patient care and drug development:

Diagnosis and prognosis of COVID-19 using AI

AI assistance in prediction and tracking of Covid-19

AI-backed, superior care for COVID-19 patients

In Xinchang County, China, drones delivered medical supplies to centers in need, and thermal-sensing drones identified people running fevers who were potentially infected with the virus.

Drug development with AI

There are several mechanisms through which AI accelerated research on Covid-19. The key thing to note is that much of the cutting-edge research is open source and thus available to the scientific and medical research community for further development or use.

Predictions for quicker vaccine development: Messenger RNA (mRNA) folds into a secondary structure and carries the instructions cells use to make proteins. Understanding those instructions and how they are translated into protein was key to the development of an mRNA vaccine. However, mRNAs have a short half-life and degrade rapidly, which complicates structural analysis of the virus. Quick access to such structural analysis was essential for shortening the time it takes to design a potential mRNA vaccine with higher stability and better effectiveness, providing an opportunity to save thousands of lives.

Baidu's AI team deployed LinearFold, a model that predicts the secondary structure of the COVID-19 RNA sequence, reducing the overall analysis time from 55 minutes to 27 seconds. Baidu also released the model for public use.
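As a concrete illustration of what "secondary structure prediction" computes, here is a minimal sketch in Python of the classic Nussinov dynamic program, which simply maximizes the number of nested, complementary base pairs in an RNA string. This is a textbook toy, not Baidu's LinearFold (which uses a much faster approximate search and a richer scoring model), and the example sequence below is made up purely for illustration.

# Minimal Nussinov-style dynamic program: maximize nested base pairs.
# Illustrative only -- NOT LinearFold; real predictors use energy models
# and approximate, near-linear-time search to handle long sequences.
def nussinov_max_pairs(seq, min_loop=3):
    """Return the maximum number of nested base pairs in `seq`."""
    pairs = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}
    n = len(seq)
    best = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):        # grow subsequences seq[i..j]
        for i in range(n - span):
            j = i + span
            score = best[i + 1][j]             # option 1: leave base i unpaired
            for k in range(i + min_loop + 1, j + 1):
                if (seq[i], seq[k]) in pairs:  # option 2: pair base i with base k
                    right = best[k + 1][j] if k + 1 <= j else 0
                    score = max(score, 1 + best[i + 1][k - 1] + right)
            best[i][j] = score
    return best[0][n - 1] if n else 0

print(nussinov_max_pairs("GGGAAAUCC"))  # prints 3: G-C, G-C and G-U pairs around the AAA loop

The cubic-time table fill above is the bottleneck that LinearFold-style methods work around, which is what makes folding a sequence as long as the roughly 30,000-base SARS-CoV-2 genome feasible in seconds rather than the better part of an hour.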

Challenges and the future of AI in health care

An AI solution must be tested across a wide range of conditions and edge scenarios before it is deemed fit for use in terms of fairness, reliability, accountability, privacy, transparency, and safety. It also requires continuous monitoring of its output vis-à-vis the ever-changing real world, so it can keep learning from it.

Finally, policy formulation must support the adoption of technology but tread with caution. The FDA is actively working with stakeholders to define a comprehensive, lifecycle-based framework that addresses the use of these technologies in medical care. This evolving framework differs significantly from the FDA's traditional regulatory control paradigm.

Artificial Intelligence has proved its value during the pandemic and holds much promise for mitigating future health care crises. However, this is just a start, and the possibilities for intelligent care are limitless. This makes AI in health care an area of great opportunity for talented technologists who are also passionate about making an impact on people and communities through their work. Policy makers, research institutes, businesses and technologists must incorporate the lessons learnt from the use of AI during the pandemic as they chart the way forward.

Views expressed above are the author's own.


Read more from the original source:

Humanity's fight against Covid: The promise of artificial intelligence - Times of India

Read More..

The biggest problem with gravity and quantum physics – Big Think

No matter what you may have heard, make no mistake: physics is not over in any sense of the word. As far as we've come in our attempts to make sense of the world and Universe around us (and we have come impressively far), it's absolutely disingenuous to pretend that we've solved and understood the natural world around us in any sort of satisfactory sense. We have two theories that work incredibly well: in all the years we've been testing them, we've never found a single observation or made a single experimental measurement that's conflicted with either Einstein's General Relativity or with the Standard Model's predictions from quantum field theory.

If you want to know how gravitation works or what its effects on any object in the Universe will be, General Relativity has yet to let us down. From tabletop experiments to atomic clocks to celestial mechanics to gravitational lensing to the formation of the great cosmic web, its success rate is 100%. Similarly, for any particle physics experiment or interaction conceivable, whether mediated via the strong, weak, or electromagnetic force, the Standard Model's predictions have always been found to agree with the results. In their own realms, General Relativity and the Standard Model can each lay claim to being the most successful physics theory of all time.

But there's a huge, fundamental problem at the heart of both of them: they simply don't work together. If you want your Universe to be consistent, this situation simply won't do. Here's the fundamental problem at the heart of physics in the 21st century.

Countless scientific tests of Einstein's General Theory of Relativity have been performed, subjecting the idea to some of the most stringent constraints ever obtained by humanity. Einstein's first solution was for the weak-field limit around a single mass, like the Sun; he applied these results to our Solar System with dramatic success. Very quickly, a handful of exact solutions were found thereafter.

On the one hand, General Relativity, our theory of gravity, was a radical concept when it first came out: so radical that it was attacked by many on both philosophical and physical grounds for many decades.


Regardless of how anyone might have felt about the new picture that Einstein's greatest achievement, the general theory of relativity, brought along with it, the behavior of physical phenomena in the Universe doesn't lie. Based on a whole suite of experiments and observations, General Relativity has proven to be a remarkably successful description of the Universe, succeeding under every conceivable condition that we've been able to test, whereas no other alternative does.

The results of the 1919 Eddington expedition showed, conclusively, that the General Theory of Relativity described the bending of starlight around massive objects, overthrowing the Newtonian picture. This was the first observational confirmation of Einstein's theory of gravity.
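For reference, the prediction the expedition tested can be stated compactly: General Relativity says that light passing a mass M at an impact parameter b is deflected by an angle

\alpha = \frac{4GM}{c^2 b},

which, for starlight grazing the Sun, works out to about 1.75 arcseconds: twice the value a purely Newtonian treatment of light corpuscles would give.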

What General Relativity tells us is that the matter-and-energy in the Universe (specifically, the energy density, the pressure, the momentum density, and the shear stress present throughout spacetime) determines the amount and type of spacetime curvature that's present in all four dimensions: the three spatial dimensions as well as the time dimension. As a result of this spacetime curvature, all entities that exist in this spacetime, including (but not limited to) all massive and massless particles, move not necessarily along straight lines, but rather along geodesics: the shortest paths between any two points defined by the curved space between them, rather than an (incorrectly) assumed flat space.
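That statement is the content of Einstein's field equations, which relate the curvature of spacetime (the left-hand side) to its matter-and-energy content (the right-hand side):

G_{\mu\nu} + \Lambda\, g_{\mu\nu} = \frac{8\pi G}{c^4}\, T_{\mu\nu},

where G_{\mu\nu} encodes the spacetime curvature, \Lambda is the cosmological constant, and T_{\mu\nu} is the stress-energy tensor containing exactly the energy density, pressure, momentum density, and shear stress described above.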

Where spatial curvature is large, the deviations from straight-line paths are large, and the rate at which time passes can dilate significantly as well. Experiments and observations in laboratories, in our Solar System, and on galactic and cosmic scales all bear this out, in great agreement with General Relativity's predictions, lending further support to the theory.

Only this picture of the Universe, at least so far, works to describe gravitation. Space and time are treated as continuous, not discrete, entities, and this geometric construction is required to serve as the background spacetime in which all interactions, including gravitation, take place.

The particles and antiparticles of the Standard Model obey all sorts of conservation laws, but also display fundamental differences between fermionic particles and antiparticles and bosonic ones. While there's only one copy of the bosonic contents of the Standard Model, there are three generations of Standard Model fermions. Nobody knows why.

On the other hand, there's the Standard Model of particle physics. Originally formulated under the assumption that neutrinos were massless entities, the Standard Model is based on quantum field theory, and its quanta interact through three fundamental forces:

The electromagnetic force is based on electric charges: all six quarks and the three charged leptons (the electron, muon, and tau) experience the electromagnetic force, while the massless photon mediates it.

The strong nuclear force is based on color charges, and only the six quarks possess them. There are eight massless gluons that mediate the strong force, and no other particles are involved in it.

The weak nuclear force, meanwhile, is based on weak hypercharge and weak isospin, and all of the fermions possess at least one of them. The weak interaction is mediated by the W-and-Z bosons, and the W bosons also possess electric charges, meaning they experience the electromagnetic force (and can exchange photons) as well.
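A compact way to summarize this structure is through the Standard Model's gauge group,

SU(3)_C \times SU(2)_L \times U(1)_Y,

where the SU(3) color factor supplies the eight gluons of the strong force, and the electroweak SU(2) x U(1) factor, after symmetry breaking via the Higgs, supplies the photon and the W-and-Z bosons.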

The inherent width, or half the width of the peak in the above image when you're halfway to the crest of the peak, is measured to be 2.5 GeV: an inherent uncertainty of about +/- 3% of the total mass. The mass of the particle in question, the Z boson, is peaked at 91.187 GeV, but that mass is inherently uncertain by a significant amount owing to its excessively short lifetime. This result is remarkably consistent with Standard Model predictions.
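The width and the lifetime are two sides of the same quantum relation: a particle with total decay width \Gamma has a mean lifetime

\tau = \frac{\hbar}{\Gamma} \approx \frac{6.58 \times 10^{-25}\ \mathrm{GeV \cdot s}}{2.5\ \mathrm{GeV}} \approx 2.6 \times 10^{-25}\ \mathrm{s},

which is why a particle that decays this quickly necessarily has an appreciably uncertain mass.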

There's a rule in quantum physics that all identical quantum states are indistinguishable from one another, and that enables them to mix together. Quark mixing was expected and then confirmed, with the weak interaction determining various parameters of this mixing. Once we learned that neutrinos were massive, not massless as originally expected, we realized that the same type of mixing must occur for neutrinos, also determined by the weak interactions. This set of interactions (the electromagnetic, weak, and strong nuclear forces, acting upon the particles that have the relevant and necessary charges) describes everything one could want in order to predict particle behavior under any imaginable conditions.

And the conditions we've tested them under are extraordinary. From cosmic ray experiments to radioactive decay experiments to solar experiments to high-energy physics experiments involving particle colliders, the Standard Model's predictions have agreed with every single such experiment ever performed. Once the Higgs boson was discovered, it confirmed our picture that the electromagnetic and weak force were once unified at high energies into the electroweak force, which was the ultimate test of the Standard Model. In all of physics history, there's never been a result the Standard Model couldn't explain.

Today, Feynman diagrams are used in calculating every fundamental interaction spanning the strong, weak, and electromagnetic forces, including in high-energy and low-temperature/condensed conditions. The electromagnetic interactions, shown here, are all governed by a single force-carrying particle: the photon, but weak, strong, and Higgs couplings can also occur. These calculations are difficult to perform, but are still far more complicated in curved, rather than flat, space.

But there's a catch. All of the Standard Model calculations we perform are based on particles that exist in the Universe, which means they exist in spacetime. The calculations we typically perform are done under the assumption that spacetime is flat: an assumption that we know is technically wrong, but one that's so useful (because calculations in curved spacetime are so much more difficult than they are in flat space) and such a good approximation to the conditions we find on Earth that we plow ahead and make this approximation anyway.

After all, this is one of the great methods we use in physics: we model our system in as simple a fashion as possible in order to capture all of the relevant effects that will determine the outcome of an experiment or measurement. Saying "I'm doing my high-energy physics calculations in flat spacetime rather than in curved spacetime" doesn't give you an appreciably different answer except in the most extreme conditions.
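A quick estimate shows just how good the flat-spacetime approximation is for laboratory physics. A rough dimensionless measure of gravity's strength at Earth's surface is the gravitational potential in units of c^2,

\frac{GM_\oplus}{R_\oplus c^2} \approx \frac{(6.7 \times 10^{-11})(6.0 \times 10^{24})}{(6.4 \times 10^{6})(9.0 \times 10^{16})} \approx 7 \times 10^{-10},

so deviations from flat spacetime here enter at roughly the parts-per-billion level, which is why plowing ahead with the flat-space approximation works so well.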

But extreme conditions do exist in the Universe: in the spacetime around a black hole, for example. Under those conditions, we can determine that using a flat spacetime background is simply no good, and we're compelled to take on the herculean task of performing our quantum field theory calculations in curved space.

Inside a black hole, the spacetime curvature is so large that light cannot escape, nor can particles, under any circumstances. Although we lack an understanding of what happens at the central singularities of black holes themselves, Einstein's General Relativity is sufficient for describing the curvature of space more than a few Planck lengths away from the singularity itself.

It might surprise you that, in principle, this isn't really all that difficult. All you have to do is replace the flat spacetime background you normally use for performing your calculations with the curved background as described by General Relativity. After all, if you know how your spacetime is curved, you can write down the equations for the background, and if you know what quanta/particles you have, you can write down the remaining terms describing the interactions between them in that spacetime. The rest, although it's quite difficult in practice under most circumstances, is simply a matter of computational power.

You can describe, for example, how the quantum vacuum behaves inside and outside of a black hole's event horizon. Because you're in a region where spacetime is more severely curved the closer you are to a black hole's singularity, the quantum vacuum differs in a calculable way. The difference in what the vacuum state is in different regions of space (particularly in the presence of a horizon, whether a cosmological or an event horizon) leads to the production of radiation and particle-antiparticle pairs wherever quantum fields are present. This is the fundamental reason behind Hawking radiation: the reason that black holes, in a quantum Universe, are fundamentally unstable and will eventually decay.

Although no light can escape from inside a black hole's event horizon, the curved space outside of it results in a difference between the vacuum state at different points near the event horizon, leading to the emission of radiation via quantum processes. This is where Hawking radiation comes from, and for the tiniest-mass black holes, Hawking radiation will lead to their complete decay in under a fraction of a second. For even the largest-mass black holes, survival beyond 10^103 years or so is impossible due to this exact process.
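The scalings behind those numbers follow directly from combining the two theories as described above: a black hole of mass M radiates at the Hawking temperature

T_H = \frac{\hbar c^3}{8\pi G M k_B},

which is inversely proportional to its mass (about 6 x 10^-8 K for a solar-mass black hole), while its evaporation time grows as the cube of its mass. That is why the smallest black holes decay in a fraction of a second while the most massive ones persist for upward of 10^100 years.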

That's as far as we can go, however, and that doesn't take us everywhere. Yes, we can make the Standard Model and General Relativity play nice in this fashion, but this only allows us to calculate how the fundamental forces work in strongly curved spacetimes that are sufficiently far away from singularities, like those at the centers of black holes or (in theory) at the very beginning of the Universe, assuming that such a beginning exists.

The maddening reason is that gravity affects all types of matter and energy. Everything is affected by gravitation, including, in theory, whatever types of particles are ultimately responsible for gravitation. Given that light, which is an electromagnetic wave, is made up of individual quanta in the form of photons, we assume that gravitational waves are made up of quanta in the form of gravitons, and we even know many of the particle properties a graviton must have, even in the absence of a full quantum theory of gravitation.

But that's precisely what we need. That's the missing piece: a quantum theory of gravity. Without it, we cannot understand or predict any of the quantum properties of gravity. And before you say, "What if they don't exist?", know that this wouldn't paint a consistent picture of reality.

Results of a double-slit experiment performed by Dr. Tonomura, showing the build-up of an interference pattern from single electrons. If it is measured which slit each electron passes through, the interference pattern is destroyed, leading to two piles instead. The number of electrons in each panel is 11 (a), 200 (b), 6000 (c), 40000 (d), and 140000 (e).

For example, consider the most inherently quantum of all the quantum experiments that have ever been performed: the double-slit experiment. If you send a single quantum particle through the apparatus and you observe which slit it goes through as it goes through it, the outcome is completely determined: the particle behaves as though it went through the slit you observed it pass through at every step of the way. If that particle was an electron, you could determine what its electric and magnetic fields were during its entire journey. You could also determine what its gravitational field was (or, equivalently, what its effects on the curvature of spacetime were) at every moment as well.

But what if you don't observe which slit it goes through? Now the electron's position is indeterminate until it gets to the screen, and only then can you determine where it is. Along its journey, even after you make that critical measurement, its past trajectory is not fully determined. Because of the power of quantum field theory (for electromagnetism), we can determine what its electric field was. But because we don't have a quantum theory of gravitation, we cannot determine its gravitational field or effects. In this sense (as well as at small, quantum-fluctuation-rich scales, or at singularities, where classical General Relativity gives only nonsense answers) we don't fully understand gravitation.

Quantum gravity tries to combine Einstein's General Theory of Relativity with quantum mechanics. Quantum corrections to classical gravity are visualized as loop diagrams, like the one shown here in white. Whether space (or time) itself is discrete or continuous is not yet decided, as is the question of whether gravity is quantized at all, or whether particles, as we know them today, are fundamental or not. But if we hope for a fundamental theory of everything, it must include quantized fields, which General Relativity does not do on its own.

This works both ways: because we don't understand gravitation at a quantum level, that means we don't quite understand the quantum vacuum itself. The quantum vacuum, or the properties of empty space, is something that can be measured in various ways. The Casimir effect, for instance, lets us measure the effect of the electromagnetic interaction through empty space under a variety of setups, simply by changing the configuration of conductors. The expansion of the Universe, if we measure it over all of our cosmic history, reveals to us the cumulative contributions of all of the forces to the zero-point energy of space: the quantum vacuum.
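The electromagnetic side of that statement is remarkably well quantified. For two ideal, parallel, uncharged conducting plates separated by a distance d, the Casimir effect predicts an attractive force per unit area of

\frac{F}{A} = -\frac{\pi^2 \hbar c}{240\, d^4},

a vacuum effect that has been measured and confirmed; it is the analogous gravitational contribution to the vacuum that we cannot calculate.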

But can we quantify the quantum contributions of gravitation to the quantum vacuum in any way?

Not a chance. We don't understand how to calculate gravity's behavior at high energies, at small scales, near singularities, or when quantum particles exhibit their inherently quantum nature. Similarly, we don't understand how the quantum field that underpins gravity (assuming there is one) behaves at all under any circumstances. This is why attempts to understand gravity at a more fundamental level must not be abandoned, even if everything we're doing now turns out to be wrong. We've actually managed to identify the key problem that needs to be solved to push physics forward beyond its current limitations: a huge achievement that should never be underestimated. The only options are to keep trying or to give up. Even if all of our attempts turn out to ultimately be in vain, it's better than the alternative.

Excerpt from:

The biggest problem with gravity and quantum physics - Big Think

Read More..