
5 unique jobs at Protective with openings now – Bham Now

Sponsored

Dreaming of working in a culture you love? We spoke with five people working in unique roles within Birmingham-based Protective and were surprised to see clear themes emerging from what they love about their jobs. Keep reading for all the details.

Blanton DePalma assists Protective's claims, asset protection division and mail center business areas in addressing their training needs.

The job requires intentional organization, advanced planning and a healthy mix of structure and flexibility to provide the level of support Protective's business colleagues require.

What does he wish he'd known before joining Protective? How enthusiastic everyone on his team is. He finds their outlook contagious and energizing.

Looking for a new role? Protective is hiring.

Louise Ritter manages a team of Product Owners and Scrum Masters who work with Information Technology (IT) teams to deliver quality solutions as efficiently as possible.

As a people person, her favorite aspect of the job is getting to meet and work with so many different personalities. She likes problem-solving and helping to enable others' success.

"I remember when I first joined six years ago, being really impressed by the amount of pride our employees associate with being part of Protective. We have a very strong culture, and our employees are at the center of that. Protective is invested in giving back in so many ways, which inspires a level of commitment you don't see everywhere."

Aquilla Jackson leads a solution team that supports those who sell Protective's products directly to the customer.

Case Managers are responsible for the initial review of life insurance applications, as well as establishing and maintaining internal and external distribution team relationships. She loves that each day is different and gives her the opportunity to learn something new. She enjoys leading people and building relationships.

"I love, love, love that the culture of Protective is family. In New Business, we pride ourselves in having a sense of family. Aside from our real families, we spend the most time with our coworkers. It feels amazing to show up to a place every day knowing that you have the same level of support here as you do at home."

Raja Chakarvorty is the Chief Data Scientist at Protective, leading the Enterprise Data Science team and function. His team analyzes large amounts of data using advanced analytics techniques to create a best-in-class customer experience and a targeted sales process with superior risk selection.

His expertise is in building data science teams and capabilities from the ground up. Protective offered him that opportunity, which aligned closely with his passion.

"Data is our lifeline, and it comes to us in broadly two forms: structured (rows and columns) and unstructured (PDFs, text, voice, images). Without data I cannot do any science! I have a motto: 'No data left behind!'"

He finds that recruiting, training and growing the team has been unexpectedly rewarding.

Because they're a fast-growing team, his role challenges him in many ways.

"The innovation and business value creation invigorates me and drives me to get more business for my team. I love that!"

Before applying and relocating from Hartford, Connecticut, he wishes he'd known what incredible people he would get to work with each day and how cosmopolitan the city of Birmingham is.

Trenaye Bailey's role enables a portion of the marketing team to achieve their goals by effectively utilizing print, tradeshows, multi-touch campaigns and digital media to increase sales productivity and generate leads for financial professionals.

She also specializes in ensuring they deliver tools, collateral and digital content that makes it easier for financial professionals to do business with and for Protective.

This role challenges her because no day is the same. This is mainly due to the growth of the company and the changing needs of the audiences.

"I wish before applying, I would have known about the great culture and great people that work at Protective. I honestly wish I would have applied for Protective fresh from college. Protective cultivates their employees and treats them like family."

Protective is hiring. Apply today.

Sponsored by:

Protective refers to Protective Life Corporation and its insurance company subsidiaries, including Protective Life Insurance Company (Nashville, TN) and Protective Life and Annuity Insurance Company (Birmingham, AL). Protective is a registered trademark of Protective Life Insurance Company.

See the original post here:

5 unique jobs at Protective with openings now - Bham Now

Read More..

Mathematics student accepted into National Science Foundation … – Clarksville Now

CLARKSVILLE, TN - Christine Jator, a student in Austin Peay State University's Department of Mathematics and Statistics, has been accepted into the prestigious National Science Foundation Research Experiences for Undergraduates (REU) program at Southern Methodist University in Dallas, Texas. The program is designed to provide students with first-hand research and coding experience in the field of data science.

Jator will take part in the programs research on solving environmental problems facing urban neighborhoods. This opportunity will give her a chance to see what working in the data science industry looks like and to gain valuable knowledge and skills that will benefit her future career aspirations.

"I am thrilled to have been accepted into this program," Jator said. "I have always been passionate about finding solutions to environmental issues, and this program will allow me to work towards that goal while gaining valuable experience in the field of data science."

Jator's professors, Dr. Ramanjit Sahi and Dr. Matt Jones, recommended the program to her and provided links to summer REUs and internships. With a recommendation letter from Jones, a professor in the Department of Mathematics and Statistics, Jator secured her spot in the program.

The NSF REU program at Southern Methodist University is renowned for its data science program, making it a prime destination for students who are passionate about the field. Jator is excited to be a part of this program and looks forward to contributing to its research efforts.

Follow this link:

Mathematics student accepted into National Science Foundation ... - Clarksville Now

Read More..

Machine learning: As AI tools gain heft, the jobs that could be at stake – The Indian Express

"Watch out for the man with the silicon chip / Hold on to your job with a good firm grip / 'Cause if you don't you'll have had your chips / The same as my old man"

Scottish revival singer-songwriter Ewan MacColl's 1986 track "My Old Man" was an ode to his father, an iron-moulder who faced an existential threat to his job because of the advent of technology. The lyrics could find some resonance nearly four decades on, as industry leaders and tech stalwarts predict that the advancement of large language models such as OpenAI's GPT-4, and their ability to write essays, code, and do maths with greater accuracy and consistency, heralds a fundamental tech shift, almost as significant as the creation of the integrated circuit, the personal computer, the web browser or the smartphone. But there still are question marks over how advanced chatbots could impact the job market. And if blue-collar work was the focus of MacColl's ballad, artificial intelligence (AI) models of the generative pre-trained transformer type signify a greater threat for white-collar workers, as more powerful word-predicting neural networks that carry out a series of operations on arrays of inputs end up producing output that is significantly humanlike. So, will this latest wave impact the current level of employment?

According to Goldman Sachs economists Joseph Briggs and Devesh Kodnani, the answer is a resounding yes: they predict that as many as 300 million full-time jobs around the world are set to get automated, with workers replaced by machines or AI systems. What lends credence to this stark prediction is the new wave of AI, especially large language models that include neural networks such as Microsoft-backed OpenAI's ChatGPT.

The Goldman Sachs economists predict that such technology could bring significant disruption to the labour market, with lawyers, economists, writers, and administrative staff among those projected to be at greatest risk of becoming redundant. In a new report, The Potentially Large Effects of Artificial Intelligence on Economic Growth, they calculate that approximately two-thirds of jobs in the US and Europe are set to be exposed to AI automation, to various degrees.

White-collar workers, and workers in advanced economies in general, are projected to be at a greater risk than blue-collar workers in developing countries. "The combination of significant labour cost savings, new job creation, and a productivity boost for non-displaced workers raises the possibility of a labour productivity boom like those that followed the emergence of earlier general-purpose technologies like the electric motor and personal computer," the report said.

And OpenAI itself predicts that a vast majority of workers will have at least part of their jobs automated by GPT models. In a study published on the arXiv preprint server, researchers from OpenAI and the University of Pennsylvania said that 80 percent of the US workforce could have at least 10 percent of their tasks affected by the introduction of GPTs.

Central to these predictions is the way models such as ChatGPT get better with more usage. GPT stands for Generative Pre-trained Transformer and is a marker for how the platform works: it is pre-trained by human developers initially and then primed to learn for itself as more and more user queries are posed to it. The OpenAI study also said that around 19 per cent of US workers will see at least 50 per cent of their tasks impacted, with the qualifier that GPT exposure is likely greater for higher-income jobs but spans across almost all industries. These models, the OpenAI study said, will end up as general-purpose technologies like the steam engine or the printing press.

A January 2023 paper, by Anuj Kapoor of the Indian Institute of Management Ahmedabad and his co-authors, explored the question of whether AI tools or humans were more effective at helping people lose weight. The authors conducted the first causal evaluation of the effectiveness of human vs. AI tools in helping consumers achieve their health outcomes in a real-world setting by comparing the weight loss outcomes achieved by users of a mobile app, some of whom used only an AI coach while others used a human coach as well.

Interestingly, while human coaches scored higher broadly, users with a higher BMI did not fare as well with a human coach as those who weighed less.

"The results of our analysis can extend beyond the narrow domain of weight loss apps to that of healthcare domains more generally. We document that human coaches do better than AI coaches in helping consumers achieve their weight loss goals. Importantly, there are significant differences in this effect across different consumer groups. This suggests that a one-size-fits-all approach might not be most effective," Kapoor told The Indian Express.

The findings: Human coaches help consumers achieve their goals better than AI coaches for consumers below the median BMI relative to consumers who have above-median BMI. Human coaches help consumers achieve their goals better than AI coaches for consumers below the median age relative to consumers who have above-median age.

Human coaches help consumers achieve their goals better than AI coaches for consumers below the median time in a spell relative to consumers who spent above-median time in a spell. Further, human coaches help consumers achieve their goals better than AI coaches for female consumers relative to male consumers.

While Kapoor said the paper did not go deeper into the why of the effectiveness of AI+Human plans for low BMI individuals over high BMI individuals, he speculated on what could be the reasons for that trend: "Humans can feel emotions like shame and guilt while dealing with other humans. This is not always true, but in general, and there's ample evidence to suggest this, research has shown that individuals feel shameful while purchasing contraceptives and also while consuming high-calorie indulgent food items. Therefore, high BMI individuals might find it difficult to interact with other human coaches. This doesn't mean that health tech platforms shouldn't suggest human plans for high BMI individuals. Instead, they can focus on (1) training their coaches well to make the high BMI individuals feel comfortable and heard and (2) deciding the optimal mix of the AI and human components of the guidance for weight loss," he added.

Similarly, female consumers responding well to human coaches "can be attributed to the recent advancements in the literature on human-AI interaction, which suggests that the adoption of AI is different for females/males and also there's differential adoption across ages," Kapoor said, adding that this can be a potential reason for the differential impact of human coaches for females over males.

An earlier OECD paper on AI and employment, titled "New Evidence from Occupations most exposed to AI", asserted that the impact of these tools would be skewed in favour of high-skilled, white-collar occupations, including: business professionals; managers; science and engineering professionals; and legal, social and cultural professionals.

This contrasts with the impact of previous automating technologies, which have tended to take over primarily routine tasks performed by lower-skilled workers. The 2021 study noted that higher exposure to AI may be a good thing for workers, as long as they have the skills to use these technologies effectively. The research found that over the period 2012-19, greater exposure to AI was associated with higher employment in occupations where computer use is high, suggesting that workers who have strong digital skills may have a greater ability to adapt to and use AI at work and, hence, to reap the benefits that these technologies bring. By contrast, there is some indication that higher exposure to AI is associated with lower growth in average hours worked in occupations where computer use is low. On the whole, the study findings suggested that the adoption of AI may increase labour market disparities between workers who have the skills to use AI effectively and those who do not. Making sure that workers have the right skills to work with new technologies is therefore a key policy challenge, which policymakers will increasingly have to grapple with.

See the article here:
Machine learning: As AI tools gain heft, the jobs that could be at stake - The Indian Express

Read More..

Hydrogen’s Hidden Phase: Machine Learning Unlocks the Secrets of the Universe’s Most Abundant Element – SciTechDaily

Phases of solid hydrogen. The left is the well-studied hexagonal close packed phase, while the right is the new phase predicted by the authors' machine learning-informed simulations. Image by Wesley Moore. Credit: The Grainger College of Engineering at the University of Illinois Urbana-Champaign

Putting hydrogen on solid ground: simulations with a machine learning model predict a new phase of solid hydrogen.

A machine-learning technique developed by University of Illinois Urbana-Champaign researchers has revealed a previously undiscovered high-pressure solid hydrogen phase, offering insights into hydrogen's behavior under extreme conditions and the composition of gaseous planets like Jupiter and Saturn.

Hydrogen, the most abundant element in the universe, is found everywhere from the dust filling most of outer space to the cores of stars to many substances here on Earth. This would be reason enough to study hydrogen, but its individual atoms are also the simplest of any element with just one proton and one electron. For David Ceperley, a professor of physics at the University of Illinois Urbana-Champaign, this makes hydrogen the natural starting point for formulating and testing theories of matter.

Ceperley, also a member of the Illinois Quantum Information Science and Technology Center, uses computer simulations to study how hydrogen atoms interact and combine to form different phases of matter like solids, liquids, and gases. However, a true understanding of these phenomena requires quantum mechanics, and quantum mechanical simulations are costly. To simplify the task, Ceperley and his collaborators developed a machine-learning technique that allows quantum mechanical simulations to be performed with an unprecedented number of atoms. They reported in Physical Review Letters that their method found a new kind of high-pressure solid hydrogen that past theory and experiments missed.

"Machine learning turned out to teach us a great deal," Ceperley said. "We had been seeing signs of new behavior in our previous simulations, but we didn't trust them because we could only accommodate small numbers of atoms. With our machine learning model, we could take full advantage of the most accurate methods and see what's really going on."

Hydrogen atoms form a quantum mechanical system, but capturing their full quantum behavior is very difficult even on computers. A state-of-the-art technique like quantum Monte Carlo (QMC) can feasibly simulate hundreds of atoms, while understanding large-scale phase behaviors requires simulating thousands of atoms over long periods of time.

To make QMC more versatile, two former graduate students, Hongwei Niu and Yubo Yang, developed a machine learning model trained with QMC simulations capable of accommodating many more atoms than QMC by itself. They then used the model with postdoctoral research associate Scott Jensen to study how the solid phase of hydrogen that forms at very high pressures melts.
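To make the surrogate-model idea concrete, here is a minimal, hypothetical Python sketch of the workflow: fit a cheap regression model to energies from a limited number of expensive reference calculations, then use it to score many more configurations. The descriptors, random data, and kernel ridge regressor are illustrative stand-ins, not the machine-learned potential used in the study.

```python
# A minimal, hypothetical sketch of the surrogate-model idea: fit a cheap
# regression model to energies from a small number of expensive quantum
# calculations, then use the cheap model to evaluate many more configurations.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)

# Pretend training set: 200 configurations, each summarized by a 10-dimensional
# descriptor vector, with an energy label from an expensive reference method.
X_train = rng.normal(size=(200, 10))      # stand-in for structural descriptors
y_train = X_train @ rng.normal(size=10)   # stand-in for reference (e.g., QMC) energies

# Fit the machine-learned surrogate.
model = KernelRidge(kernel="rbf", alpha=1e-3, gamma=0.5)
model.fit(X_train, y_train)

# The surrogate can now score thousands of configurations at negligible cost,
# which is what makes large simulation cells and long runs feasible.
X_large = rng.normal(size=(10_000, 10))
energies = model.predict(X_large)
print(energies.shape)  # (10000,)
```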

The three of them were surveying different temperatures and pressures to form a complete picture when they noticed something unusual in the solid phase. While the molecules in solid hydrogen are normally close-to-spherical and form a configuration called hexagonal close packed (Ceperley compared it to stacked oranges), the researchers observed a phase where the molecules become oblong figures (Ceperley described them as egg-like).

"We started with the not-too-ambitious goal of refining the theory of something we know about," Jensen recalled. "Unfortunately, or perhaps fortunately, it was more interesting than that. There was this new behavior showing up. In fact, it was the dominant behavior at high temperatures and pressures, something there was no hint of in older theory."

To verify their results, the researchers trained their machine learning model with data from density functional theory, a widely used technique that is less accurate than QMC but can accommodate many more atoms. They found that the simplified machine learning model perfectly reproduced the results of standard theory. The researchers concluded that their large-scale, machine learning-assisted QMC simulations can account for effects and make predictions that standard techniques cannot.

This work has started a conversation between Ceperley's collaborators and some experimentalists. High-pressure measurements of hydrogen are difficult to perform, so experimental results are limited. The new prediction has inspired some groups to revisit the problem and more carefully explore hydrogen's behavior under extreme conditions.

Ceperley noted that understanding hydrogen under high temperatures and pressures will enhance our understanding of Jupiter and Saturn, gaseous planets primarily made of hydrogen. Jensen added that hydrogen's simplicity makes the substance important to study. "We want to understand everything, so we should start with systems that we can attack," he said. "Hydrogen is simple, so it's worth knowing that we can deal with it."

Reference: "Stable Solid Molecular Hydrogen above 900 K from a Machine-Learned Potential Trained with Diffusion Quantum Monte Carlo" by Hongwei Niu, Yubo Yang, Scott Jensen, Markus Holzmann, Carlo Pierleoni and David M. Ceperley, 17 February 2023, Physical Review Letters. DOI: 10.1103/PhysRevLett.130.076102

This work was done in collaboration with Markus Holzmann of Univ. Grenoble Alpes and Carlo Pierleoni of the University of L'Aquila. Ceperley's research group is supported by the U.S. Department of Energy, Office of Basic Energy Sciences, Computational Materials Sciences program under Award DE-SC0020177.

The rest is here:
Hydrogen's Hidden Phase: Machine Learning Unlocks the Secrets of the Universe's Most Abundant Element - SciTechDaily

Read More..

Unlocking the Investment Potential of S in ESG – AllianceBernstein

The most commonly available social data concern workforce gender diversity, health and safety, product recalls, and human rights policies. But data quality remains sketchy. For example, the fact that a company has a human rights policy doesn't mean it is a good policy or well implemented.

It's also difficult to compare social metrics across companies and industries. Unlike carbon footprints and governance standards, which can be easily compared across companies or sectors, social issues differ across industries.

In the apparel industry, for example, key issues include forced or child labor, the proportion of employees covered by trade unions or bargaining agreements, grievance reporting mechanisms, and supplier codes of conduct.

In retail banking, predatory lending is a big social concern, along with access to services for customers from lower-income backgrounds, privacy and data security, and fines levied for regulatory breaches. Food and beverage companies should be judged on product quality and recalls, investment in safety and quality systems, and production time lost because of workplace injury or safety incidents.

These issues will gain more prominence because official scrutiny of S factors is increasing steadily.

Our research shows that, between 2011 and 2022, key Western governments and quasi-governments took 23 significant actions, such as introducing legislation or guiding principles and holding parliamentary inquiries, to curb forced labor and human rights abuses. Most actions (17) took place in the second half of that period.

This slow-moving tsunami of legislation will force companies to carry out and report due diligence in their operations and supply chains. Government moves to ban products made with forced labor are already underway in the US and will soon be followed by the European Union.

Grassroots advocacy on social issues is becoming more vocal. COVID-19 highlighted the inequality of vaccine distribution and the strain on healthcare, while supply chain disruptions caused by the pandemic and by the war in Ukraine have shed light on challenging conditions in some export-producing countries. Rising inflation and the cost-of-living crisis are increasing public awareness of social issues.

Investors, in our view, should take two steps to accommodate the growing importance of S factors in their portfolios.

The first is to address issues of data quality and availability. Where data are available, their materiality for various industries should be mapped appropriately. Then, data science and qualitative analysis can help drive better insights.

For example, specialized third-party providers may have more in-depth knowledge of S factors than in-house securities analysts, but they typically cover fewer companies. Using data science, investors can access new data sources with the help of artificial intelligence.

Understanding the data is important to avoid drawing false conclusions. S controversies are more common in some industries, such as autos. But don't assume that industries with less data have a correspondingly low level of controversies. Similarly, fundamental research can verify that a company's human rights policy is effective and appropriately implemented.

The second step is to develop a research framework that can identify key S-related risks and opportunities.

We have identified three broad themes to help investors make sense of the evolving S investment environment: a changing world, a just world and a healthy world.

Follow this link:

Unlocking the Investment Potential of S in ESG - AllianceBernstein

Read More..

Perfecting the Data Balancing Act – CDOTrends

What do Hong Kong's business giants have in common? A diverse portfolio of businesses, a long history, and large operations. These characteristics also make them slower to change. But many remain successful because of their ability to disrupt and innovate.

Against the backdrop of the rise of AI, many of them are going through the next phase of transformation. Enterprises are rethinking their core data architectures with data lakehouse platforms as they prepare to ride the oncoming AI wave.

A data lakehouse is the latest data management architecture, combining the flexibility and scalability of data lakes in storing structured and unstructured data with the data management and transactions of data warehouses, enabling BI and machine learning (ML) on all data. More about the data lakehouse can be found here.
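As a rough illustration of the pattern, here is a minimal sketch using PySpark with the open-source Delta Lake table format, one popular lakehouse implementation. The packages, path, and schema below are illustrative assumptions, and the exact Spark configuration varies by environment; this is not a description of any of the deployments discussed in the article.

```python
# A minimal lakehouse sketch: records land as files in cheap open storage
# (the data lake side), then the same files are queried with warehouse-style
# SQL and ACID transactions (the warehouse side). Assumes the pyspark and
# delta-spark packages are installed; the path and columns are made up.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

# Land raw records in object storage, as a data lake would.
orders = spark.createDataFrame(
    [(1, "HK", 120.0), (2, "SG", 80.5)],
    ["order_id", "region", "amount"],
)
orders.write.format("delta").mode("overwrite").save("/tmp/lakehouse/orders")

# Then query the same files with SQL on top, as a warehouse would.
spark.sql(
    "CREATE TABLE IF NOT EXISTS orders USING DELTA LOCATION '/tmp/lakehouse/orders'"
)
spark.sql("SELECT region, SUM(amount) AS total FROM orders GROUP BY region").show()
```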

Shaking from the data core

One of them is Li & Fung, a leading global supply chain player. The 117-year-old company is rich with history and talent but also complex. Leo Liu, the chief digital officer of LFX, the digital arm of Li & Fung, shared the century-old company's data transformation journey at the recent Data + AI World Tour conference in Hong Kong, organized by Databricks.

"With over 100 years of supply chain knowledge and experience, LFX's mission is to create and invest in digital ventures that will transform the supply chain and retail industries," said Liu. "One key initiative is to modernize our legacy data platform."

He explained that the company once acquired 50 different businesses within a year, creating an extremely complicated environment with multiple technology stacks. On top of that, the organization relied on a 10-year-old on-premises data warehouse, slowing down its big data and AI strategy for innovation. "AI cannot work without [a modernized] data platform," he said.

Shaking down a data platform from its core for a business giant is not easy. But Liu took the challenge to a new level by looking to complete the data architecture transformation within five months. This included configuring and setting up a new cloud, data platform and data pipelines; migrating the data and dashboards; training; and launching.

"I'm still here, so you know we did it," Liu said. The success included producing three quick wins to demonstrate the value of the new platform, all within the year.

On top of his dedicated and motivated team, Liu attributed the success to three primary criteria of the new data platform: open standards, multi-cloud support, and full integration for data engineers, scientists, and business users. Working with Databricks to build a data lakehouse platform, Liu said the company could now achieve all three criteria and focus on its AI innovation.

"We can now develop dashboards and reports within 24 hours; before, it took weeks and months," he said. "This year, we are moving forward with our AI strategy."

Beyond legacy

Dealing with legacy is even more challenging for highly regulated payment players like HSBC and Octopus Card. They are achieving better data governance and predictive modeling while riding on a data lakehouse platform.

"As business needs evolve, there is a growing need for better data analytics and robust data governance to ensure that data provides value and supports our business strategy," said Thomas Qian, wholesale chief data science architect & analytical platform lead at HSBC.

Qian noted one example is the tracking and analysis of users' behavior at PayMe, HSBC's mobile payment service. He said the insights on customers' usage patterns contributed to the launch of PayMe for Business, a service for merchants to collect payment.

"By working with Databricks, we can scale data analytics and machine learning to enable customer-centric use cases, including personalization, recommendations, and fraud detection," he added.

Data governance struggle

Meanwhile, at Octopus Card, data privacy and governance have been top priorities.

"We have a very tight data governance policy to protect customer data because we literally hold the data of all Hong Kong people," said Tony Chan, senior data science manager at Octopus Card.

Chan shared the challenge of battling through the stringent governance process to access data. But his team is exploring the use of a data lakehouse platform for easier data governance and more scalable prediction analysis.

He added that they wanted to move from rule-based analysis to AI modeling to detect merchant churn rates. The data lakehouse platform allows his team to scale the analysis on tens of thousands of merchants and predict their churn rate, helping the sales team to prioritize the renewal process.

"We hope to slowly transform users' and senior management's expectations of AI and promote more AI applications," Chan added.

Speedy rebound as borders reopen

The data lakehouse platform is also transforming customer experiences. Swire Properties, a real-estate unit of another Hong Kong-based conglomerate, has recently taken advantage of a data lakehouse platform to drive precision marketing.

"I first joined the company as customer relationship management, nothing technical. But I quickly realized that I could do nothing about quality customer engagement without quality data. So, I took the initiative to formulate a data strategy," said Veronica Ho, head of data analytics & insights at Swire Properties.

Part of the strategy was consolidating more than one million data points from 30 different data sources across four business pillars: shopping malls, offices, residential, and hotels. This consolidated data platform became the foundation for developing a predictive model that supported the company's speedy rebound after borders reopened post-COVID-19.

Ho said by understanding the customers with multi-faceted segmentations, the company can develop hyper-personalized and precision marketing campaigns, like tailored birthday surprises. These precision marketing campaigns allowed the company to reach and engage seven times more members.

For members across the border, data-driven marketing also allowed the team to identify and develop personalized offers to 60% of high-potential members who went dormant during the pandemic.

For Swire Properties, Octopus Card, HSBC, and Li & Fung alike, unifying the data platform to drive data integrity and governance is only the first step towards realizing their AI strategies. More data-forward business giants are harnessing the value of their data and applying AI through the lakehouse platform to transform into business legends.

"As pioneers of the data lakehouse, we are passionate about making data and AI accessible to everyone," said Jia Woei Ling, managing director for North Asia at Databricks.

Sheila Lam is the contributing editor of CDOTrends. Covering IT for 20 years as a journalist, she has witnessed the emergence, hype, and maturity of different technologies but is always excited about what's next. You can reach her at [emailprotected].

Image credit: iStockphoto/Orla

Read more here:

Perfecting the Data Balancing Act - CDOTrends

Read More..

Causal Bayesian machine learning to assess treatment effect heterogeneity by dexamethasone dose for patients with … – Nature.com

This is a post hoc exploratory analysis of the COVID STEROID 2 trial [7]. It was conducted according to a statistical analysis plan, which was written after the pre-planned analyses of the trial were reported, but before any of the analyses reported in this manuscript were conducted (https://osf.io/2mdqn/). This manuscript was presented according to the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) checklist [12], with Bayesian analyses reported according to the Reporting of Bayes Used in clinical STudies (ROBUST) guideline [13].

HTE implies that some individuals respond differently, i.e., better or worse, than others who receive the same therapy due to differences between individuals. Most trials are designed to evaluate the average treatment effect, which is the summary of all individual effects in the trial sample (see supplementary appendix for additional technical details). Traditional HTE methods examine patient characteristics one at a time, looking to identify treatment effect differences according to individual variables. This approach is well known to be limited, as it is underpowered (due to adjustment for multiple testing) and does not account for the fact that many characteristics under examination are correlated and may have synergistic effects. As a result, more complex relationships between variables that better define individuals, and thus may better inform understanding about the variations in treatment response, may be missed using conventional HTE approaches. Thus, identifying true and clinically meaningful HTE requires addressing these data and statistical modeling challenges. BART (Bayesian additive regression trees) is inherently an attractive method for this task, as the algorithm automates the detection of nonlinear relationships and interactions hierarchically based on the strength of the relationships, thereby reducing researchers' discretion when analyzing experimental data. This approach also avoids any model misspecification or bias inherent in traditional interaction test procedures. BART can also be deployed, as we do herein, within the counterfactual framework to study HTE, i.e., to estimate conditional average treatment effects given the set of covariates or potential effect modifiers [11,14,15], and has shown superior performance to competing methods in extensive simulation studies [16,17]. These features make BART an appealing tool for trialists to explore HTE to inform future confirmatory HTE analyses in trials and hypothesis generation more broadly. Thus, this analysis used BART to evaluate the presence of multivariable HTE and estimate conditional average treatment effects among meaningful subgroups in the COVID STEROID 2 trial.

The COVID STEROID 2 trial [7] was an investigator-initiated, international, parallel-group, stratified, blinded, randomized clinical trial conducted at 31 sites in 26 hospitals in Denmark, India, Sweden, and Switzerland between 27 August 2020 and 20 May 2021 [7,18]. The trial was approved by the regulatory authorities and ethics committees in all participating countries.

The trial enrolled 1000 adult patients hospitalized with COVID-19 and severe hypoxemia (≥10 L oxygen/min, use of non-invasive ventilation (NIV), continuous use of continuous positive airway pressure (cCPAP), or invasive mechanical ventilation (IMV)). Patients were primarily excluded due to previous use of systemic corticosteroids for COVID-19 for 5 or more days, unobtainable consent, and use of higher-dose corticosteroids for indications other than COVID-19 [4,17]. Patients were randomized 1:1 to dexamethasone 12 mg/d or 6 mg/d intravenously once daily for up to 10 days. Additional details are provided in the primary protocol and trial report [7,18].

The trial protocol was approved by the Danish Medicines Agency, the ethics committee of the Capital Region of Denmark, and institutionally at each trial site. The trial was overseen by the Collaboration for Research in Intensive Care and the George Institute for Global Health. A data and safety monitoring committee oversaw the safety of the trial participants and conducted 1 planned interim analysis. Informed consent was obtained from the patients or their legal surrogates according to national regulations.

We examined two outcomes: (1) DAWOLS at day 90 (i.e., the observed number of days without the use of IMV, circulatory support, and kidney replacement therapy, without assigning dead patients the worst possible value), and (2) 90-day mortality. Binary mortality outcomes were used to match the primary trial analysis; time-to-event outcomes also generally tend to be less robust for ICU trials [19]. We selected DAWOLS at day 90 in lieu of the primary outcome of the trial (DAWOLS at day 28) to align with other analyses of the trial that sought to examine outcomes over a longer term. Both outcomes were assessed in the complete intention-to-treat (ITT) population, which was 982 after the exclusion of patients without consent for the use of their data [7]. As the sample size is fixed, there was no formal sample size calculation for this study.

While BART is a data-driven approach that can scan for interdependent relationships among any number of factors, we only examined heterogeneity across a pre-selected set of factors deemed to be clinically relevant by the authors and members of the COVID STEROID 2 trial Management Committee. The pre-selected variables that were included in this analysis are listed below, with the scale used in parentheses. Continuous covariates were standardized to have a mean of 0 and a standard deviation of 1 prior to analysis. Detailed variable definitions are available in the study protocol [18].

participant age (continuous),

limitations in care (yes, no),

level of respiratory support (open system versus NIV/cCPAP versus IMV),

interleukin-6 (IL-6) receptor inhibitors (yes, no),

use of dexamethasone for up to 2 days versus use for 3 to 4 days prior to randomization,

participant weight (continuous),

diabetes mellitus (yes, no),

ischemic heart disease or heart failure (yes, no),

chronic obstructive pulmonary disease (yes, no), and,

immunosuppression within 3 months prior to randomization (yes, no).

We evaluated HTE on the absolute scale (i.e., the mean difference in days for the number of DAWOLS at day 90 and the risk difference for 90-day mortality). The analysis was separated into two stages [14,20,21,22]. In the first stage, conditional average treatment effects were estimated according to each participant's covariates using BART models. The DAWOLS outcome was treated as a continuous variable and analyzed using standard BART, while the binary mortality outcome was analyzed using logit BART. In the second stage, a fit-the-fit approach was used, where the estimated conditional average treatment effects were used as dependent variables in models to identify covariate-defined subgroups with differential treatment effects. This second stage used classification and regression tree (CART) models [23], where the maximum depth was set to 3 as a post hoc decision to aid interpretability. As the fit-the-fit reflects estimates from the BART model, the resulting overall treatment effects (e.g., risk difference) vary slightly from the raw trial data.
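To illustrate the shape of this two-stage procedure, the sketch below uses scikit-learn stand-ins: a gradient-boosted ensemble in place of BART and a shallow regression tree in place of rpart's CART. The simulated data, variable names, and models are assumptions for illustration, not the trial analysis itself.

```python
# A minimal sketch of the two-stage "fit-the-fit" idea with illustrative
# stand-ins for BART (GradientBoostingRegressor) and CART via rpart
# (DecisionTreeRegressor). Data are simulated.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(1)
n = 1000
X = rng.normal(size=(n, 5))                  # baseline covariates
a = rng.integers(0, 2, size=n)               # randomized arm indicator
y = X[:, 0] + a * (0.5 + X[:, 1]) + rng.normal(size=n)  # outcome with HTE in X[:, 1]

# Stage 1: model the outcome given covariates and arm, then estimate each
# participant's conditional average treatment effect as the difference
# between predictions under the two counterfactual arms.
outcome_model = GradientBoostingRegressor().fit(np.column_stack([X, a]), y)
cate = (outcome_model.predict(np.column_stack([X, np.ones(n)]))
        - outcome_model.predict(np.column_stack([X, np.zeros(n)])))

# Stage 2 ("fit-the-fit"): regress the estimated effects on the covariates
# with a shallow tree (depth 3, matching the paper's post hoc choice); the
# leaves define interpretable covariate subgroups.
tree = DecisionTreeRegressor(max_depth=3).fit(X, cate)
print(export_text(tree, feature_names=[f"x{i}" for i in range(5)]))
```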

BART models are often fit using a sum of 200 trees and specifying a base prior of 0.95 and a power prior of 2, which penalize substantial branch growth within each tree [15]. Although these default hyperparameters tend to work well in practice, it was possible they were not optimal for these data. Thus, the hyperparameters were evaluated using tenfold cross-validation, comparing the predictive performance of the model under 27 pre-specified possibilities, namely every combination of power priors equal to 1, 2, or 3, base priors equal to 0.25, 0.5, or 0.95, and number of trees equal to 50, 200, or 400. The priors corresponding to the lowest cross-validation error were used in the final models. Each model used a Markov chain Monte Carlo procedure consisting of 4 chains that each had 100 burn-in iterations and a total length of 1100 iterations. Posterior convergence for each model was assessed using the diagnostic procedures described in Sparapani et al. [24]. Model diagnostics were good for all models. All parameters seemed to converge within the burn-in period and the z-scores for Geweke's convergence diagnostic [25] were approximately standard normal. All BART models were fit using R statistical computing software v. 4.1.2 [26] with the BART package v. 2.9 [24], and all CART models were fit using the rpart package v. 4.1.16 [27].
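For readers who want to see the structure of such a search, here is a schematic Python sketch of tenfold cross-validation over the 27 prior combinations. The actual analysis tuned BART priors in R; here the model fitter is passed in as a callable, and the mean squared error metric is an illustrative choice, not necessarily the one used in the paper.

```python
# Schematic grid search over the 27 pre-specified prior combinations using
# 10-fold cross-validation. `fit` is any callable returning a fitted model
# with a .predict method; in the actual analysis this role was played by a
# BART fitter in R.
from itertools import product
import numpy as np
from sklearn.model_selection import KFold

POWER_PRIORS = (1, 2, 3)
BASE_PRIORS = (0.25, 0.5, 0.95)
N_TREES = (50, 200, 400)

def cv_error(fit, X, y, power, base, n_trees, n_splits=10):
    """Mean squared prediction error over k folds for one prior setting."""
    errors = []
    for train, test in KFold(n_splits=n_splits, shuffle=True, random_state=0).split(X):
        model = fit(X[train], y[train], power=power, base=base, n_trees=n_trees)
        errors.append(np.mean((y[test] - model.predict(X[test])) ** 2))
    return float(np.mean(errors))

def select_hyperparameters(fit, X, y):
    """Evaluate all 3 x 3 x 3 = 27 combinations; keep the lowest-error one."""
    grid = product(POWER_PRIORS, BASE_PRIORS, N_TREES)
    return min(grid, key=lambda g: cv_error(fit, X, y, *g))
```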

The analysis was performed under the ITT paradigm; compliance issues were considered minimal. As in the primary analyses of the trial, the small amount of missing outcome data was ignored in the primary analyses. Sensitivity analyses were performed under best/worst- and worst/best-case imputation. For best/worst-case imputation, the entire estimation procedure was repeated after setting all missing mortality outcome data in the 12 mg/d group to alive at 90 days and all missing mortality outcome data in the 6 mg/d group to dead at 90 days. Then, all days with missing life support data were set to alive without life support for the 12 mg/d group and the opposite for the 6 mg/d group. Under worst/best-case imputation, the estimation procedure was repeated under the opposite conditions, e.g., setting all missing mortality outcome data in the 12 mg/d group to dead at 90 days and all missing mortality outcome data in the 6 mg/d group to alive at 90 days.
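A small pandas sketch of the best/worst-case logic for the mortality outcome may help make this concrete; the data frame, column names, and coding (1 = dead at 90 days, 0 = alive) are assumptions for illustration rather than the trial's actual data layout.

```python
# Illustrative best/worst-case sensitivity imputation for a binary mortality
# outcome. Column names and coding are hypothetical.
import pandas as pd

def impute_mortality(df: pd.DataFrame, favor_12mg: bool) -> pd.DataFrame:
    """Fill missing 90-day mortality so one arm receives the favorable value."""
    out = df.copy()
    missing = out["dead_90d"].isna()
    is_12mg = out["arm"] == "12mg"
    if favor_12mg:   # best case for 12 mg/d, worst case for 6 mg/d
        out.loc[missing & is_12mg, "dead_90d"] = 0
        out.loc[missing & ~is_12mg, "dead_90d"] = 1
    else:            # the mirror-image worst/best-case scenario
        out.loc[missing & is_12mg, "dead_90d"] = 1
        out.loc[missing & ~is_12mg, "dead_90d"] = 0
    return out

# The whole estimation pipeline is then re-run on impute_mortality(df, True)
# and impute_mortality(df, False) to bound the influence of missing data.
```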

The resulting decision trees from each fit-the-fit analysis described above (one for the 90-day mortality outcome, and one for the 90-day DAWOLS outcome) were outputted (with continuous variables de-standardized, i.e., back-translated to the original scales). Likewise, the resulting decision trees for each outcome after best- and worst-case imputation were outputted for comparison with the complete records analyses. All statistical code is made available at https://github.com/harhay-lab/Covid-Steroid-HTE.

View post:
Causal Bayesian machine learning to assess treatment effect heterogeneity by dexamethasone dose for patients with ... - Nature.com

Read More..

We have joined Turing University Network – Mirage News

We have further strengthened our ties to The Alan Turing Institute and our connections with other top-ranking universities by becoming a member of the Institute's newly launched Turing University Network.

The Alan Turing Institute is the national institute for data science and artificial intelligence and the Turing University Network forms part of its new strategy aimed at using data science and Artificial Intelligence (AI) for social good.

Our research in data science and AI is recognised as world leading and our researchers are tackling real-world challenges.

Last year, we were named among the first-ever successful applicants to The Alan Turing Institute's Network Development Awards, in recognition of our proven research excellence in data science and AI.

We have a significant research portfolio in data science and AI that supports pioneering multi-disciplinary data science research.

Notable research carried out at our globally outstanding University includes innovative sensing technology for self-driving cars, next generation airport scanners, automated visual surveillance and using the ExaHyPE simulation engine to power the oneAPI programming model.

Additionally, we are expanding the Turing portfolio in areas such as AI in education, digital theology and digital humanities.

Over the years, our Computer Science Department has expanded with more AI-facing research groups, such as the Artificial Intelligence and Human Systems group (AIH); Scientific Computing (SciComp); and Vision, Imaging and Visualisation (VIViD); besides the original Algorithms and Complexity group (ACiD).

Recently, our strength in the AI in Education area was recognised, with Durham being selected to host two major international conferences, EDM '22 and the CORE A-ranked AIED '22, as well as co-hosting the Turing Artificial Intelligence in Education Event '22.

As a research-intensive university, we are heavily involved in important regional initiatives such as the N8 Research Partnership, representing the eight most research-intensive universities in the North of England.

Within the N8 Centre of Excellence in Computationally Intensive Research (N8CIR), we visibly support the high-performance computing community, by hosting the BEDE supercomputer and key research areas of Digital Humanities, Digital Health, and recently Machine Learning.

Being a member of the Turing University Network will enable us to engage and collaborate with other UK universities with an interest in data science and AI, both within The Alan Turing Institute and its broader networks.

Go here to read the rest:

We have joined Turing University Network - Mirage News

Read More..

ASU graduate has 4 majors, 2 minors, 3 certificates and long list of … – ASU News Now

April 24, 2023

Editor's note: This story is part of a series of profiles of notable spring 2023 graduates.

Anusha Natarajan has made a splash at Arizona State University as a leader, diligent student and involved community member.

She has been featured in ASU News previously for being a Killam Fellow, being selected for the Henry Clay Center College Student Congress and for winning the 2022 John Lewis Youth Leadership Award from the Arizona Secretary of State's Office. However, this barely scratches the surface of what Natarajan accomplished while at ASU.

Natarajan is graduating this semester with four concurrent bachelor's degrees in history, political science, sociology and applied quantitative science, along with minors in Spanish and geography and certificates in international studies, political economy and social science research methods.

"I initially started off as a business major, but I realized there was not as much flexibility in the school with all of the interests that I have, so I decided to major in sociology and history," Natarajan said. "Eventually, I became interested in wanting to strengthen my quantitative background, so I added the applied quantitative science degree and social science research methods certificate to get more proficient in that."

On top of her studies, she was a journalist for the State Press, a research fellow for the Center on the Future of War, and a student representative on the Civic Engagement Coalition. She also worked within Changemaker Central at ASU and was elected to the Barrett Honors College Council.

Additionally, as a student she started an organization called Culture Talk, which seeks to educate the larger community about culture, and she was the editor-in-chief and co-founder of the School of Historical, Philosophical and Religious Studies Digital Humanities Journal, an online journal for ASU students to publish their research in history, philosophy and religious studies.

Outside of campus, Natarajan serves on the student advisory board at Campus Vote Project and is involved with Girl Up, a leadership development initiative focusing on equity for girls and women in spaces where they are unheard or underrepresented.

She is also the winner of 14 scholarships and awards, including the Spirit of Service Scholarship, the Lily K. Sell Global Experience Scholarship and a PULSE Scholarship.

"My time at ASU has been a great way for me to learn how I can combine different fields together, whether it be through research or my academic experience," said Natarajan.

We caught up with her to discuss her time at ASU, her advice for current students and her plans for the future.

Question: What was your "aha" moment when you realized you wanted to study the fields you majored in?

Answer: I would say close to junior year when I decided that I wanted to add on the other degrees to become more proficient in data analysis and other data collection methods. I took some statistics classes in my junior and senior year that made me realize the importance of having data in our lives and how to make that relatable to social issues, like economic inequality or misinformation. Data is needed now more than ever in the social sciences, especially in our ever-changing world.

Q: What's something you learned while at ASU, in the classroom or otherwise, that surprised you or changed your perspective?

A: During my time at the State Press, I was able to learn about how ASU has been harnessing its charter to build partnerships with the State Department and other big companies to make education more accessible and open to the world. I was able to learn how ASU values the importance of universal learning through my reporting work on ASU's partnership with Crash Course and creating the ASU for You platform during the pandemic. Universal learning is a process where we continue to learn, and I like how ASU provides opportunities for academic enrichment regardless of where one might be in life.

Q: Why did you choose ASU?

A: I chose ASU because of the academic opportunities, specifically Barrett, The Honors College, and extracurricular activities, like the State Press. I also liked how my college experience has gotten me ready for the professional and academic world, especially when it came to getting involved with the various research opportunities during my time here. I like the focus that ASU has with research, and I have been able to get a lot from that in my academic and extracurricular experiences.

Q: Which professor taught you the most important lesson while at ASU?

A: This is kind of general, but going to my professors' office hours has been really great towards my planning for the future because I have the opportunity to get to know them one on one. All of my professors have taught me about the importance of office hours, and they are important because you take a lot of information away, especially when it comes to an assignment or your next step after undergrad.

Q: What's the best piece of advice you would give to students?

A: I would say get involved on campus. You will be able to find a lot of opportunities for growth. You meet people from outside of your major, and you also gain a lot of professional skills that the classroom might not give you, especially leadership skills. It is also a great way to start building your networking skills because that will be important after graduation.

Q: What was your favorite spot on campus, whether for studying, meeting friends or just thinking about life?

A: I really like the Hayden Library, especially the reading room on the first floor. I like it because it is quiet and also it's nice to see people moving around and about throughout the day. I also like the wide-ranging genres for books that are available for students to continue learning.

Q: What are your plans after graduation?

A: I plan to enroll in a data science program either at Columbia or Vanderbilt to strengthen my quantitative background en route to a PhD program to further research about comparative election misinformation.

Q: If someone gave you $40 million to solve one problem on our planet, what would you tackle?

A: I would love to tackle education equality as many people around the world still don't have access to it. I would invest resources in building scholarships for underrepresented women globally for them to get funding for their higher education and also investing resources for textbooks, paper and appropriate technologies to ensure that schools are properly equipped to teach their students and for teachers to feel confident and prepared in teaching. The COVID-19 pandemic has caused serious gaps in our education proficiency, and I want to ensure that future generations don't suffer from those setbacks.

Read more:

ASU graduate has 4 majors, 2 minors, 3 certificates and long list of ... - ASU News Now

Read More..

Machine Learning as a Service Market Size Growing at 37.9% CAGR Set to Reach USD 173.5 Billion By 2032 – Yahoo Finance

Acumen Research and Consulting

Acumen Research and Consulting recently published a report titled "Machine Learning as a Service Market Forecast, 2023 - 2032"

TOKYO, April 24, 2023 (GLOBE NEWSWIRE) -- The Global Machine Learning as a Service Market Size accounted for USD 7.1 Billion in 2022 and is projected to achieve a market size of USD 173.5 Billion by 2032, growing at a CAGR of 37.9% from 2023 to 2032.

Machine Learning as a Service Market Research Report Highlights and Statistics:

The Global Machine Learning as a Service Market Size in 2022 stood at USD 7.1 Billion and is set to reach USD 173.5 Billion by 2032, growing at a CAGR of 37.9%

MLaaS allows users to access and utilize pre-built algorithms, models, and tools, making it easier and faster to develop and deploy machine learning applications.

Adoption of cloud-based technologies, the need for managing the huge amount of data generated, and the rise in demand for predictive analytics and natural language processing are driving the growth of the Machine Learning as a Service market.

North America is expected to hold the largest market share in the Machine Learning as a Service market due to the presence of large technology companies and the increasing demand for advanced technologies in the region.

Some of the key players in the Machine Learning as a Service market include Amazon Web Services, IBM Corporation, Google LLC, Microsoft Corporation, and Oracle Corporation.

Request For Free Sample Report @ https://www.acumenresearchandconsulting.com/request-sample/385

Machine Learning as a Service Market Report Coverage:

Market: Machine Learning as a Service Market
Market Size 2022: USD 7.1 Billion
Market Forecast 2032: USD 173.5 Billion
CAGR During 2023 - 2032: 37.9%
Analysis Period: 2020 - 2032
Base Year: 2022
Forecast Data: 2023 - 2032
Segments Covered: By Component, By Application, By Organization Size, By End-Use Industry, and By Geography
Regional Scope: North America, Europe, Asia Pacific, Latin America, and Middle East & Africa
Key Companies Profiled: Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP), IBM Watson, Oracle Cloud, Alibaba Cloud, SAS, Predictron Labs Ltd, FICO, and Hewlett Packard Enterprise
Report Coverage: Market Trends, Drivers, Restraints, Competitive Analysis, Player Profiling, Regulation Analysis

Machine Learning as a Service Market Overview:

The increasing adoption of cloud-based technologies and the need to manage the enormous amount of data generated have led to the rise in demand for MLaaS solutions. MLaaS provides pre-built algorithms, models, and tools, making it easier and faster to develop and deploy machine learning applications. This service is being used in various industries such as healthcare, retail, BFSI, manufacturing, and others.


The healthcare industry is using MLaaS for patient monitoring and disease prediction. In retail, MLaaS is being used for personalized recommendations and fraud detection. MLaaS is also being utilized for financial fraud detection, sentiment analysis, recommendation systems, predictive maintenance, and much more.

The Natural Language Processing (NLP) segment is expected to grow rapidly during the forecast period. NLP is being used by organizations to analyze customer feedback, improve customer experience, and automate customer service. MLaaS vendors such as Amazon Web Services, IBM Corporation, Google LLC, Microsoft Corporation, and Oracle Corporation offer various pricing models and features, making the Machine Learning as a Service market competitive.

Trends in the Machine Learning as a Service Market:

Automated Machine Learning (AutoML): The development of AutoML algorithms is reducing the need for expert data scientists to develop machine learning models, allowing non-experts to develop and deploy models with less effort and cost.

Edge Computing: Machine learning models are being deployed on edge devices such as smartphones, IoT sensors, and other devices to reduce latency and improve privacy.

Explainable AI: Machine learning models are becoming more transparent, and algorithms are being developed that can explain how the model arrived at its decisions.

Federated Learning: Machine learning models are being developed to train on data that is distributed across multiple devices, allowing for privacy protection and faster training.

Synthetic Data: Synthetic data is being used to augment training data, reducing the need for large amounts of real data and improving model accuracy.

Time Series Analysis: Machine learning models are being developed to analyze and predict time series data, which is important in industries such as finance and transportation.

Personalization: Machine learning models are being developed to provide personalized recommendations, content, and experiences to users.

Generative Models: Generative models are being developed to create new data based on existing data, which can be used for various applications such as image and text generation.

Machine Learning as a Service Market Dynamics

Increased demand for advanced analytics: Businesses are looking for ways to extract insights from their data to improve decision-making, and MLaaS provides a fast and efficient way to do so.

Quantum Machine Learning: Machine learning algorithms are being developed that can run on quantum computers, which offer significant speed improvements over classical computers.

Interpretable Machine Learning: Machine learning models are being developed to provide interpretable results, allowing users to understand how the model arrived at its decisions.

Reinforcement Learning: Reinforcement learning algorithms are being developed to teach machines how to make decisions based on feedback from their environment.

Multi-Task Learning: Machine learning models are being developed to perform multiple tasks simultaneously, reducing the need for multiple models.

Transfer Learning: Machine learning models are being developed that can transfer knowledge learned from one task to another, reducing the need for large amounts of training data.

Increasing adoption of IoT devices: The growing number of IoT devices is generating massive amounts of data that can be analyzed with machine learning algorithms, driving demand for MLaaS services.

Speech Recognition: Machine learning models are being developed that can accurately recognize speech, which is important for applications such as virtual assistants and speech-to-text.

Low barriers to entry: MLaaS provides a low barrier to entry for businesses that want to incorporate machine learning into their operations but lack the resources to do so in-house.

Explainable Deep Learning: Deep learning models are being developed that can provide interpretable results, allowing users to understand how the model arrived at its decisions, which is important for applications such as healthcare and finance.

Growth Hampering Factors in the Market for Machine Learning as a Service:

Concerns about data security and privacy: Many businesses are hesitant to use MLaaS due to concerns about data security and privacy, which may hamper the growth of the market.

Complexity of machine learning models: Developing and deploying machine learning models can be complex, which may limit the adoption of MLaaS by businesses.

Limited interpretability of machine learning models: Many machine learning models are not easily interpretable, which may make it difficult for businesses to understand the underlying logic and decision-making process of these models.

Limited availability of training data: Machine learning models require large amounts of high-quality training data, and if this data is not available, it may limit the ability of businesses to develop accurate models.

Cost: MLaaS can be expensive, especially for small and medium-sized businesses, which may limit adoption.

Lack of trust in machine learning models: If businesses do not trust the accuracy and reliability of machine learning models, they may be hesitant to adopt MLaaS.

Check the detailed table of contents of the report @

https://www.acumenresearchandconsulting.com/table-of-content/machine-learning-as-a-service-mlaas-market

Market Segmentation:

By Component

By Application

Security and Surveillance

Augmented and Virtual Reality

Marketing and Advertising

Fraud Detection and Risk Management

Predictive Analytics

Computer Vision

Natural Language Processing

Other

By Size of Organization

By End-Use Industry

Retail

BFSI

Healthcare

Public sector

Manufacturing

IT and Telecom

Energy and Utilities

Aerospace and Defense

Machine Learning as a Service Market Overview by Region:

North America's Machine Learning as a Service market share is the highest globally, due to the high adoption of cloud computing and the presence of several major players in the region. The United States is the largest market for MLaaS in North America, driven by the increasing demand for predictive analytics, the growing use of deep learning, and the rising adoption of artificial intelligence (AI) across various industries. For instance, companies in the healthcare sector are using MLaaS for predicting patient outcomes, and retailers are using it to analyze customer behavior and preferences to deliver personalized experiences.

The Asia-Pacific region's Machine Learning as a Service market share is also large and growing at the fastest rate, due to the increasing adoption of cloud computing, the growth of IoT devices, and the rise of e-commerce in the region. China is the largest market for MLaaS in the Asia Pacific region, with several major companies investing in AI and machine learning technologies. For example, Alibaba, the largest e-commerce company in China, is using MLaaS for predictive analytics and recommendation engines. Japan is another significant market for MLaaS in the region, with companies using it for predictive maintenance and fraud detection.

Europe is another key market for Machine Learning as a Service, with countries such as the United Kingdom, Germany, and France driving growth in the region. The adoption of MLaaS in Europe is being driven by the growth of e-commerce and the increasing demand for personalized experiences. For example, companies in the retail sector are using MLaaS to analyze customer data and make personalized product recommendations. The healthcare sector is also a significant user of MLaaS in Europe, with providers using it for predictive analytics and diagnosis.

The MEA and South American regions hold a smaller but growing Machine Learning as a Service market share, which is expected to expand at a steady pace.

Buy this premium research report

https://www.acumenresearchandconsulting.com/buy-now/0/385

Machine Learning as a Service Market Key Players:

Some of the major players in the Machine Learning as a Service market include Amazon Web Services, Google LLC, IBM Corporation, Microsoft Corporation, SAP SE, Oracle Corporation, Hewlett Packard Enterprise Development LP, Fair Isaac Corporation (FICO), Fractal Analytics Inc., H2O.ai, DataRobot, Alteryx Inc., Big Panda Inc., RapidMiner Inc., SAS Institute Inc., Angoss Software Corporation, Domino Data Lab Inc., TIBCO Software Inc., Cloudera Inc., and Databricks Inc. These companies offer a wide range of MLaaS solutions, including predictive analytics, machine learning algorithms, natural language processing, deep learning, and computer vision.


More:
Machine Learning as a Service Market Size Growing at 37.9% CAGR Set to Reach USD 173.5 Billion By 2032 - Yahoo Finance

Read More..