W&M explores creation of a computing and data science school – College of William and Mary

William & Mary is exploring the possibility of establishing a new academic unit in computing and data science, Provost Peggy Agouris told members of the Board of Visitors on Thursday.

The effort springs from a surge in student interest in applied science, computer science and data science at William & Mary, and a commitment from the university in its strategic plan to support anticipated needs in the Virginia workforce.

To meet anticipated growth, Agouris has formed an exploratory design team with representatives from all five W&M schools, while three core departments are working to develop a model for the proposed academic unit, which could potentially be a separate school.

"It's critical to evaluate how these growing units can be best organized because it can have serious implications on our ability to provide resources for the education that W&M is offering across disciplines and to attract and expand key partnerships," said Agouris, who presented the effort during the Board of Visitors Committee on Academic Affairs meeting in the W&M Alumni House. "The right organizational structure can re-imagine our value in the computational and data space. It can foster important relationships at the state and federal levels, with other institutions, with friends and donors, and with like-minded organizations that might be new partners to us. It's my hope that it deepens our strengths and expands our horizons."

The university has experienced an explosion of interest in the computational sciences in recent years, and computational skills are also increasingly used throughout other disciplines. Over the last 10 years, interest in computational fields has more than tripled at W&M, going from 211 declared majors in just two fields (computer science and math) to 738 in six (computer science, data science, math, computational and applied mathematics and statistics, business analytics data science, and business analytics supply chain).

The growth in those fields reflects an overall increase in student interest in STEM fields at W&M. From 2011 to 2022, the number of graduates in STEM disciplines at W&M has more than doubled, growing from 284 to 693. Looking just at the past two years, the number of computer science degrees that the university conferred went from 78 to 93. In the data science program, which just began in 2020, the number of degrees conferred went from eight in 2021 to 35 in 2022.

At the same time, data has become increasingly important to the university overall. With data as one of four initiatives outlined in the Vision 2026 strategic plan, William & Mary has committed to expanding its presence and influence in computational and data sciences, consistent with student demand and Virginia workforce needs.

"This school represents an opportunity to boldly grow the community of William & Mary in new directions, serve new student populations and showcase the incredible talent of our teachers and researchers to new domestic and international audiences," said Dan Runfola, assistant professor of applied science. "By integrating our computational activities into a new unit, we recognize the unique challenges and opportunities these rapidly evolving fields present and gain the ability to nimbly respond to new opportunities without disrupting our ability to offer a world-class liberal arts education."

Formal discussions about a possible computing and data science unit at W&M started in spring 2022 and developed organically, Agouris said, with faculty members initially raising the idea. After an ad hoc design team with representatives from the university's arts and sciences, business, education, law and marine science was formed to explore the possibilities, its members began conducting research on similar structures at other universities and considering what might make sense for William & Mary.

Faculty leaders from the departments of computer science, applied science and the data science program are now working on drafting a model based on that research. This semester, the model will be refined as feedback is received from various stakeholders, including the Faculty Assembly.

The model and action plan are expected to be finalized in the spring, with a goal of submitting them to the Board of Visitors and the State Council of Higher Education for Virginia in the fall of 2023.

The exploratory effort is part of William & Mary's continued work to increase its offerings in the computational sciences as career opportunities and student interest grow.

Currently, the university offers bachelor's, master's and doctoral degrees in computer science as well as a computer science minor. In 2020, W&M began offering a bachelor's degree in data science, and subsequently created the popular Jump Start Data Science summer program that can lead to an accelerated minor. The Department of Applied Science has a well-established doctoral program that also offers a data science concentration. Applied science also offers an undergraduate minor and master's degree options.

Increasing the number of students with data science and computational skills is also a focus of the federal and state government. In 2019, the university joined the commonwealth's Tech Talent initiative, which seeks to increase the number of Virginians with computer science-related degrees. The Tech Talent Investment Program provides funding to participating Virginia universities and colleges to help expand that tech talent pipeline.

While preparing interested students to enter that pipeline is one of the key drivers for exploring a new computing and data science unit at W&M, Agouris said it is all still in the early phases and that the university is doing its due diligence in seeing what might be the best fit for the university.

"We want to make sure this makes sense for our university based on the growth we are experiencing, the associated demands, and also what we are hearing from our academic community," said Agouris.

Staff, University News & Media

Torqata to Host ‘Reinvent the Wheel’ Hackathon 2.0 – Tire Review

Torqata announced that it, in partnership with sponsors American Tire Distributors (ATD), Google Cloud, and Continental, will host the Reinvent the Wheel Hackathon 2.0 on Nov. 11, at ATD's headquarters in Huntersville, North Carolina. The 24-hour competition invites college students and professionals specializing in data science at all experience levels to develop sustainable solutions that reduce the environmental impact of the tire and automotive aftermarket industry.

The inaugural Reinvent the Wheel Hackathon, held in 2019, attracted 85 participants across 17 teams. After postponements due to the pandemic, Torqata says this year's hackathon returns with cash prizes, a newly redesigned venue at ATD headquarters and on-site perks. During the event, the data science and engineering minds will be organized into 20 teams and challenged to drive sustainability in the automotive aftermarket by optimizing pick-up logistics for recycled tire feedstock.

While all applications are welcome, Torqata says the challenge requires skills in coding (R/Python), data visualization, data science, and some basic programming knowledge. After 24 hours of coding, mini-challenges, and giveaways, each team will pitch its solution to a panel of judges. The top three teams will be selected, followed by an awards presentation for the winners with local Charlotte businesses, industry leaders and media in attendance.

"During the first hackathon, we were blown away by the immense talent from even the youngest participants," said Tim Eisenmann, chief executive officer of Torqata. "Climate change is an undeniable reality, and these data scientists are in a unique position to help solve perhaps one of the biggest problems our industry faces."

Torqata is seeking applications through Saturday, Oct. 1, from teams and from individuals who want to be added to teams.

How to build an effective DataOps team – TechTarget

A DataOps strategy is heavily reliant on collaboration as data flows between managers and consumers throughout the business. Collaboration is essential to DataOps success, so it's important to start with the right team to drive these initiatives.

It's natural to think of DataOps as simply DevOps for data -- not quite. It would be more accurate to say that DataOps is trying to achieve for data what DevOps achieves for coding: a dramatic improvement in productivity and quality. However, DataOps has some other problems to solve, in particular how to maintain a mission-critical system in continuous production.

The distinction is important when it comes to thinking about putting together a DataOps team. If the DevOps approach is a template, with Product Managers, Scrum Masters and Developers, the focus will end up on delivery. DataOps also needs to focus on continuous maintenance and requires some other frameworks to work with.

One key influence on DataOps has been Lean manufacturing techniques. Managers often use terms taken from the classic Toyota Production System, which has been much studied and imitated. Terms like "data factory" also come up when talk turns to data pipelines in production.

This approach requires a distinctive team structure. Let's first look at some roles within a DataOps team.

The roles described here are for a DataOps team deploying data science in mission-critical production.

What about teams who are less focused on data science? Do they need DataOps, too, for example, for a data warehouse? Certainly, some of the techniques may be similar, but a traditional team of extract, transform and load (ETL) developers and data architects is probably going to work well. A data warehouse, by its nature, is less dynamic and more constant than an Agile pipelined data environment. The following DataOps team roles handle the rather more volatile world of pipelines, algorithms and self-service users.

Nevertheless, DataOps techniques are becoming more relevant as data warehouse teams push to be ever more Agile, especially with cloud deployments and data lakehouse architectures.

Let's start with defining the roles required for these new analytics techniques.

Data scientists do research. If an organization knows what it wants and just needs someone to implement a predictive process, it should get a developer who knows their way around algorithms. The data scientist, on the other hand, explores for a living, discovering what is relevant and meaningful as they do.

In the course of exploration, a data scientist may try numerous algorithms, often in ensembles of diverse models. They may even write their own algorithms.

The key attributes for this role are restless curiosity and an interest in the domain, as well as technical insight -- especially in statistics -- to understand the significance of what they discover and the real-world impact of their work.

This diligence matters. It is not enough to find one good model and stop there because business domains rapidly evolve. Also, while everyone may not work in areas with compelling ethical dilemmas, data scientists in every domain sooner or later come across issues of personal or commercial privacy.

This is a technical role, but don't overlook the human side, especially if the organization is only hiring one data scientist. A good data scientist is a good communicator who can explain findings to a nontechnical audience, often executives, while being straightforward about what is and is not possible.

Finally, the data scientist, especially one working in a domain which is new to them, is unlikely to know all the operational data sources -- ERP, CRM, HR systems and so on -- but they certainly need to work with the data. In a well-governed system, they may not have direct access to all the unprocessed data of an enterprise. They need to work with other roles who understand the source systems better.

Generally, it is the data engineer who moves data between operational systems and the data lake -- and, from there, between zones of the lake such as raw data, cleansed and production areas.
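
To make that zone-to-zone movement concrete, here is a minimal sketch of a single raw-to-cleansed promotion step in Python. The paths, file formats and column names (such as order_date) are hypothetical assumptions for illustration, and a real pipeline would usually run inside an orchestrator rather than as a bare script:

```python
# Minimal sketch of promoting a file from the raw zone to the cleansed zone.
# Paths, column names and cleansing rules are illustrative assumptions.
# Writing parquet requires a parquet engine such as pyarrow.
from pathlib import Path

import pandas as pd

RAW_ZONE = Path("lake/raw")
CLEANSED_ZONE = Path("lake/cleansed")

def promote_to_cleansed(filename: str) -> Path:
    """Read a raw extract, apply basic cleansing, write it to the cleansed zone."""
    df = pd.read_csv(RAW_ZONE / filename)

    # Typical cleansing steps: normalize column names, drop exact duplicates
    # and enforce the types downstream consumers expect.
    df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]
    df = df.drop_duplicates()
    df["order_date"] = pd.to_datetime(df["order_date"], errors="coerce")
    df = df.dropna(subset=["order_date"])  # reject rows with unparseable dates

    # Write a columnar copy to the cleansed zone for analytics workloads.
    CLEANSED_ZONE.mkdir(parents=True, exist_ok=True)
    out_path = CLEANSED_ZONE / filename.replace(".csv", ".parquet")
    df.to_parquet(out_path, index=False)
    return out_path
```

The point of the sketch is the separation of zones: raw files stay untouched, and every transformation that produces the cleansed copy is explicit and repeatable.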

The data engineer also supports the data warehouse, which can be a demanding task in itself as they must maintain history for reporting and analysis while providing for continuous development.

At one time, the data engineer may have been called a data warehouse architect or ETL developer, depending on their expertise. But data engineer is the new term of art, and it better captures the operational focus of the role in DataOps.

Another engineer? Yes, and one focused on operations. But the DataOps engineer has a different area of expertise: supporting the data scientist.

The data scientist's skills focus on modeling and deriving insight from data. However, it is common to find that what works well on the workbench can be difficult or expensive to deploy into production. Sometimes an algorithm runs too slowly against a production data set, or uses too much compute or storage to scale effectively. The DataOps engineer helps here by testing, tweaking and maintaining models for production.

As part of this, the DataOps engineer knows how to keep a model scoring accurately enough over time as data drifts. They also know when to retrain the model or reconceptualize it, even if that work falls to the data scientist.
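
As a hedged illustration of what monitoring for drift can look like, the sketch below compares a window of recent production inputs against the training data with a per-feature two-sample Kolmogorov-Smirnov test. The feature layout and alert threshold are assumptions, not a prescribed method:

```python
# Minimal drift-monitoring sketch for tabular features.
# The alpha threshold and the choice of test are illustrative assumptions.
import numpy as np
from scipy.stats import ks_2samp

def drifted_features(reference: np.ndarray, live: np.ndarray,
                     names: list, alpha: float = 0.01) -> list:
    """Flag features whose live distribution differs from the training reference.

    Runs a two-sample Kolmogorov-Smirnov test per feature; a small p-value
    suggests the live data no longer resembles what the model was trained on.
    """
    flagged = []
    for i, name in enumerate(names):
        _, p_value = ks_2samp(reference[:, i], live[:, i])
        if p_value < alpha:
            flagged.append(name)
    return flagged

# Usage: compare recent production inputs (same column order as training data)
# and route any flagged features to a retraining review.
# if drifted_features(train_X, recent_X, feature_names): alert the team
```

A check like this is cheap to run on a schedule, which is exactly the kind of continuous-production duty that distinguishes the DataOps engineer from a purely delivery-focused developer.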

The DataOps engineer keeps models running within budget and resource constraints that they likely understand better than anyone else on the team.

In a modern organization, the data analyst may have a wide range of skills, ranging from technical knowledge to aesthetic understanding of visualization to so-called soft skills, such as collaboration. They are also less likely to have had much technical training compared to, say, a database developer.

Their data ownership -- and influence -- may depend less on where they sit in the organizational hierarchy and more on their personal commitment and their willingness to take ownership of a problem.

These people are in every department. Look around. Someone is "the data person," who, regardless of job title, knows where the data is, how to work with it and how to present it effectively.

To be fair, this role is becoming more formalized today, but there are still a large number of data analysts who have grown into the role from a business rather than technical background.

Is the executive sponsor a member of the team? Perhaps not directly, but the team won't get far without one. A C-level sponsor can be critical for aligning the specific work of a DataOps team with the strategic vision and the tactical decisions of the enterprise. They can also ensure the team has budget and resources with long-term goals in mind.

Few organizations can, or will, immediately stand up a team of four or more just for DataOps. The capabilities and value of the team must grow over time.

How, then, should a team grow? Who should be the first hire? It all depends on where the organization is starting from. But there needs to be an executive sponsor from day zero.

It is unlikely the team is starting from scratch. Organizations need DataOps precisely because they already have work in progress that needs to be better operationalized. They may have started to look at DataOps because they have data scientists stretching the boundaries of what they can manage today.

If so, the first hire should be a DataOps engineer because it is their role to operationalize data science and make it manageable, scalable and comprehensive enough to be mission-critical.

On the other hand, it is possible an organization has a traditional data warehouse, and there are data engineers involved and data analysts downstream from them. In this case, the first DataOps team position would be a data scientist for advanced analysis.

An important question is whether to create a formal organization or a virtual team. This is another important reason for the executive sponsor, who may have a lot of say in the answer. Many DataOps teams start as virtual groups who work across organizational boundaries to ensure data and data flow are reliable and trustworthy.

Whether loosely or tightly organized, these discrete disciplines grow in strength and impact over time, and their strategic direction and use of resources will cohere into a consistent framework for exploration and delivery. As this happens, the organization can add more engineering for scale and governance and more scientists and analysts for insight. At this point, wherever the organization started, the team is likely to become more formally organized and recognized.

It's an exciting process. The DataOps team can make the difference between an enterprise that occasionally does cool things with data and an enterprise that runs efficiently and reliably on data, analytics and insight.

AI Skills Crisis May Lead to Wasted Investments and Stifled Innovation: SAS – Datanami

CARY, N.C., Sept. 23, 2022 -- Urgent action is needed to tackle an artificial intelligence (AI) skills crisis that is already stifling US productivity and innovation, new research has found. Published by analytics leader SAS, How to Solve the Data Science Skills Shortage is a report based on a survey of decision makers from major US firms spanning nine sectors, including banking, insurance, government and retail.

Fortune Business Insights projects the global artificial intelligence market to grow from $387 billion in 2022 to nearly $1.4 trillion by 2029.[1] Correspondingly, AI and machine learning are top investment priorities over the next one to two years, according to 43% of SAS survey respondents. That is well ahead of data technology stalwarts such as data visualization (25%), data analytics (22%) and big data (17%).

But there is a massive red flag: Sixty-three percent of respondents also claim their largest skills shortages are in AI and machine learning.

No Easy Answers on How to Bridge the Skills Gap

Without the talent, these increased investments in artificial intelligence and machine learning could be wasted, leading to financial losses and unrealized opportunities. Survey respondents are planning different tacks to address skills gaps, but they cite several challenges.

Three-quarters of respondents want to train and upskill existing staff, compared to 64% who want to recruit new talent. Training and upskilling may prove more cost-effective than hiring and using contractors, but the study highlighted barriers such as a lack of time and motivation, and a belief that senior management may be worried about people taking their new skills elsewhere.

Given how fierce the war for talent has become, salary is another sticking point. Companies may have little choice but to pay ever-higher salaries, recruitment costs and contractor fees to secure the skills they need. The estimated total pay for a data scientist now stands at around $122,000 in the US,[2] and this may not be sustainable for many organizations.

Universities Still Important, but Companies Looking Beyond Degrees

An April study by Indeed found that 67% of large companies surveyed would consider dropping their degree requirements.[3] The findings of the How to Solve the Data Science Skills Shortage report mirror this trend.

Respondents largely want to work with academic institutions to recruit data talent directly, but understand that relying on graduates alone will not fill vacancies fast enough. And what they are seeking from potential hires is not necessarily a four-year degree.

Building Data Science Talent Now

Dr. Sally Eaves, AI expert, author and speaker who contributed to the How to Solve the Data Science Skills Shortage report, said: "Businesses cannot rely solely on graduates or continue the poaching merry-go-round. The good news is employers have already begun to recognize the value of on-the-job training and other certifications, as stated in the report."

The report outlines three recommendations to address the data science skills gap.

"There is no single approach, but a combination of expanding mid-career training, including to those currently in non-technology roles, equipping people with the right tools for the job and growing the data science community will start to see that skills gap narrow," said Eaves. "Together, they could significantly increase the supply of talent, and create good-quality, satisfying jobs that benefit individuals, organizations and the wider economy."

Learn more about how SAS can help an organization build analytics skills to gain a competitive edge.

Methodology

SAS commissioned a survey of 72 US-based decision-makers in organizations spanning nine sectors, including banking, insurance, government and retail. Each worked for organizations with more than 1,000 employees; some had more than 100,000. The vast majority were in technical roles, including data science and data analytics, and just under a quarter were in HR and talent management. The survey was also carried out in the UK and Ireland, and conducted by Coleman Parkes.

About SAS

SAS is the leader in analytics. Through innovative software and services, SAS empowers and inspires customers around the world to transform data into intelligence. SAS gives you THE POWER TO KNOW.

Notes

1. https://www.fortunebusinessinsights.com/industry-reports/artificial-intelligence-market-100114
2. https://www.glassdoor.com/Salaries/data-scientist-salary-SRCH_KO0,14.htm
3. https://www.indeed.com/lead/report-how-covid-19-pandemic-changed-recruiting?hl=en&co=US

Source: SAS

Hackathon Prepares Young Researchers for the Collaborative Science of the Future – UCSF

Bioscience discovery is not a solo pursuit. Confronted with mountains of genomic data, it takes a team of researchers with complementary expertise to glean gemlike insights.

On a sunny Friday, teams of aspiring young scientists gathered in the Clinical Sciences building at Parnassus Heights, looking for treasure in a trillion data points about cancer.

They were competing in a hackathon put on by UC San Francisco's CoLabs Initiative and the Bakar ImmunoX Initiative, two programs that foster collaborative working across disciplines. Each team was given the same data and two days to make something of it before presenting their ideas to the judges and an audience of their peers.

"We asked these students, 'If you want to understand something new about cancer, what are you going to look for in this data?'" said Max Krummel, PhD, chair of ImmunoX. "This was a chance for them to pursue their own questions, with the potential to find some exciting leads."

CoLabs has been hosting annual hackathons since 2019. The events draw participants with diverse perspectives, including clinical fellows, postdoctoral researchers, staff scientists and PhD students. The 2020 hackathon, held virtually, gave participants the chance to work with novel information related to COVID-19 from the UCSF COMET study, which tracked patients hospitalized at UCSF and Zuckerberg San Francisco General Hospital and Trauma Center.

This year's teams got to look at some fresh data, generated by Krummel and colleagues as part of a cross-disciplinary study classifying hundreds of patient-donated tumors by their immune signatures.

Data scientist Gabi Fragiadakis, PhD, and ImmunoX administrator Jonathan Wilson organized this year's hackathon, which took place on Aug. 18 and 19. They prepared for the event by dividing the 26 participants into five teams with complementary skills.

"Bringing biology and data expertise together makes for the best insights and an environment of mutual learning," said Fragiadakis.

The teams were provided resources on how to use the data analysis tools, which they put to use on the first day of the event, giving participants who were new to number crunching a chance to learn from peers and try their hands.

They looked for patterns in the data, sometimes comparing them with patterns in the tumor data that Krummel and Combes had worked with earlier.

Finally, teams presented the ideas they'd been able to pull together over the previous day and a half. The judges -- Combes; Karin Pelka, PhD; Mary Helen Barcellos-Hoff, PhD; and Matt Spitzer, PhD -- mulled over the presentations.

"We were really impressed by the amount of work that each team was able to accomplish in such a short time, as well as the diversity of the questions they tried to answer with this rich dataset," said Combes.

The judges wound up tied on their evaluation of the two top teams presentations. The jury was out until Fragiadakis and hackathon trainee volunteers graded the teams on the quality of their computational analysis. In the end, the winning group squeaked out a victory with a project that highlighted differences between lung tumors from smokers and nonsmokers.

Fragiadakis, Krummel, and Combes said the hackathon ideas could offer new insights into cancer treatment and should be followed up.

Beyond furthering research and giving young scientists new skills, the hackathons embody the spirit of CoLabs, which Krummel and Combes see as the future of science.

The CoLabs initiative is both an idea and a dedicated space on the Parnassus campus, designed specifically to make it easier for scientists from different disciplines to work together.

Fragiadakis leads a CoLab that applies data science to immunological problems. Combes directs a CoLab profiling the immune system across a broad array of diseases, and also trains researchers on data analysis techniques. Other CoLabs do cross-disciplinary work focused on imaging, genomics and instrumentation.

"This is the direction things are going," said Combes. "We see that confirmed in the fact that UCSF is invested in creating the infrastructure for this kind of collaboration."

Combes and Krummel are enthusiastic about a proposed new research building that will be structured with that ethos in mind, expanding the idea and the space of collaboration.

"We want to update the culture of science, to be more engaging and more current," Krummel said. "This will be a place where the acts of many people together can produce better results, and where we can teach and elaborate a more collaborative culture."

There's a shared feeling among the community of faculty in CoLabs that mentorship activities like the hackathon can give students experience that will propel them through their careers.

"It can transform how they work," he said. "And that's vital, because at some point they'll move on to their own labs, and the value of all this information will depend on what they make of it."

Bread, backpacks, and bosons: a summer at CERN – Yale News

Caitlin Gainey and some of her Yale friends spent the summer in Europe hiking in the mountains, strolling through medieval villages and searching for subatomic particle collisions that few humans have ever seen.

Gainey, a Yale College senior studying astrophysics, along with fellow Yale seniors Dawson Thomas, Matthew Murphy, and Alexandra Haslund-Gourley, conducted critical research at one of the world's most important physics hubs: the Large Hadron Collider at CERN (the European Organization for Nuclear Research), located just outside Geneva, Switzerland. They were part of a science team led by Sarah Demers, a physics professor in Yale's Faculty of Arts and Sciences.

The Large Hadron Collider, the world's biggest particle accelerator, located in a giant underground compound, restarted in mid-summer after four years of upgrades. Physicists use the facility to test theories about the fundamental laws of physics, from the composition of space and time to the relationship between quantum mechanics and general relativity.

The Yale students' job was to analyze test collisions of subatomic particles, look for specific particles such as Z bosons and J/Psi particles, and create visual displays of the collisions. The work involved an intensive amount of physics knowledge, computer coding, and graphics expertise.

An example of the event displays of particle collisions that Yale students created, based on data from CERN's Large Hadron Collider.

Their visit coincided with the 10th anniversary of the discovery of the Higgs boson, a fundamental particle on the order of an electron or quark, a landmark moment celebrated by CERN scientists in July. During the same month, CERN announced the discovery of three new particles, a pentaquark and two tetraquarks, using a more powerful accelerator beam.

Event displays created by the Yale students, which illustrated specific particle collisions, were a prominent part of the announcement.

"It was an exciting time to be at CERN, and these students were in the thick of it," said Demers, a CERN associate research scientist, collaborator with the ATLAS experiment at LHC, and part of the international research team that discovered the Higgs boson (along with fellow Yale physicists Keith Baker and Paul Tipton).

"I'm incredibly impressed with what they accomplished," she said.

The physics foursome from Yale College arrived in Switzerland in May, boasting a variety of science skills and research interests.

Haslund-Gourley, who hails from Santa Barbara, California, has been passionate about physics since grade school. She'd previously completed a physics internship at the Fermilab facility in suburban Chicago, and she hosts a science podcast, Extended Office Hours, on Spotify.

Thomas, who is from the Atlanta suburbs, studies physics and mathematics, with a special interest in using geometric and topological machine learning methods to explore particle physics.

Gainey, like Haslund-Gourley, grew up in Santa Barbara. She'd already worked in three science labs during her time at Yale; two of them focused on astronomy research and one on particle physics. Her research interest is in applying data science techniques to both fields.

Murphy, a member of the Yale rowing team from Portland, Oregon, had no previous lab experience before he emailed Demers, his former professor in PHYS 200, to ask about research opportunities in her campus lab. She told him he could do that, or he could just come to Switzerland.

"This was my first time doing research," Murphy said. "I didn't know what to expect."

They arrived at their rented apartment outside Geneva during a driving rainstorm the third week of May. Their digs were just a 10-minute train ride to CERN.

With Demers guiding the way, they quickly got to work.

The Yale undergrads entered the CERN scene amid a wave of activity. The Large Hadron Collider, first fired up in 2008, was starting its third extended run of particle collisions with a series of test collisions.

Their assignment was two-fold: pick out interesting collision events from the early data that would indicate whether the detector was working properly and develop visual displays of those events, showing particles and energy deposits from the accelerator.

They spent weeks learning about the workings of the Large Hadron Collider itself, and then familiarizing themselves with computing tools they would use to access test data and write code to identify collision event candidates.
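
The students' actual selection code is not described in detail, but the general shape of such a task can be sketched. The fragment below is a simplified illustration rather than ATLAS software: it flags dimuon events whose invariant mass lands near the Z boson's roughly 91 GeV, and the muon kinematics in the example are invented:

```python
# Illustrative sketch: flag dimuon events consistent with a Z boson decay.
# Not ATLAS code; inputs are (pt in GeV, eta, phi in radians), invented here.
import numpy as np

MUON_MASS = 0.1057  # GeV
Z_MASS = 91.19      # GeV

def four_vector(pt, eta, phi, mass=MUON_MASS):
    """Build an (E, px, py, pz) four-vector from collider coordinates."""
    px, py, pz = pt * np.cos(phi), pt * np.sin(phi), pt * np.sinh(eta)
    energy = np.sqrt(px**2 + py**2 + pz**2 + mass**2)
    return np.array([energy, px, py, pz])

def invariant_mass(p1, p2):
    """Invariant mass of a two-particle system: m^2 = E^2 - |p|^2."""
    total = p1 + p2
    return np.sqrt(max(total[0]**2 - np.sum(total[1:]**2), 0.0))

def is_z_candidate(mu1, mu2, window=10.0):
    """True if the dimuon invariant mass is within `window` GeV of the Z mass."""
    m = invariant_mass(four_vector(*mu1), four_vector(*mu2))
    return abs(m - Z_MASS) < window

# Example: two invented muons give a mass near 96 GeV, inside the window.
print(is_z_candidate((45.0, 0.5, 1.2), (44.0, -0.3, -1.9)))  # True
```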

"I remember sitting in the apartment with Dawson on a Friday night in the middle of June, writing code that wasn't quite working," Murphy said. "Then, suddenly, it began to work perfectly. It was awesome."

"I felt so lucky," said Haslund-Gourley. "I'd grown up always wanting to work at CERN, this magical land where physicists learn about the forces and particles that make up the universe."

On July 5, the students were on hand for the first collisions using the collider's stable beam, which reached a record-breaking energy level of 13.6 TeV (teraelectronvolts). They watched a livestream broadcast of collisions, listened to music and waited to run their code.

"Once we started, it was a race against time, with some of the best physicists in the world reviewing everything we did," Gainey said.

"It was an exciting kind of pressure," Thomas said. "They needed the event displays as soon as possible."

One of their first visualizations was used almost immediately in a scientific lecture for the 2022 International Conference on High Energy Physics. It was the first slide in the presentation, in fact.

"That was incredibly validating," Haslund-Gourley said.

By all accounts, the groups work was successful.

"Our students identified the only publicly available candidates for event displays from ATLAS, and they have been thanked and highlighted regularly in collaboration-wide presentations," Demers said.

Aside from their assigned projects, the students said they enjoyed being immersed in an intense, scientific environment far from home. Haslund-Gourley, for example, said she felt inspired by the international nature of collaborations at CERN; Thomas was happy he got the chance to meet some of the physicists he idolized from Particle Fever, the 2013 film documentary that inspired him to pursue physics in the first place.

In their off-hours, the students hiked in the Jura mountains, toured rustic villages in France, and explored the sights in Vienna, Budapest, Munich, and Bern. There were ample opportunities to sample local cuisine, too.

"A lot of bread was consumed," Gainey said.

As for the future, Gainey said she'll continue her CERN work while at Yale this year, making it the basis of her senior project. Thomas, meanwhile, thinks he's found a serendipitous way to bring topological machine learning methods to bear on particle physics.

Haslund-Gourley, for her part, was inspired by the data science and machine learning techniques used to process collision data at CERN and hopes to apply similar analysis techniques to neurological data.

And Murphy? After a summer at CERN, he says he's got the research bug, big-time. "I never felt stressed," he said. "We all just hung out, did our jobs, and made it work. I know I'll continue to work at CERN for my thesis."

Data analysis and intelligent policy design, not good intentions, will fix health care post-COVID – Harvard Kennedy School

Soroush Saghafian (intro): The consequences of COVID for, let's say, rural hospitals are very different than for urban hospitals. A lot of these rural hospitals that ended up closing because of COVID, they are the hospitals that are essentially also the large employers in those rural communities. A lot of people also lost their jobs, lost their work, lost their only income. I think for me, COVID gave us the ability to think about all of this more proactively.

Amitabh Chandra (intro): To improve health care in America, we need two things. We need aspiration, and we need evidence. I think we have a lot of the aspiration. What Soroush is saying is we don't have the evidence. Let's not lose the aspiration we have. I think embedded in the Inflation Reduction Act is a well-intentioned aspiration that we want to reduce the price of a variety of drugs. We don't use the best evidence to figure out how to do that.

Ralph Ranalli (intro): Welcome to the Harvard Kennedy School PolicyCast. I'm your host, Ralph Ranalli. The COVID-19 pandemic has stretched the U.S. health care system and health care systems across the world to the breaking point and beyond. If there's a silver lining, it may be that there is now the urgency and will among politicians and policymakers to pursue meaningful changes that could result in improved access to health care that's both more affordable and higher quality. A recent example in the US was the health care provisions in the Biden Administration's Inflation Reduction Act, which were hailed as a breakthrough for, at a minimum, finally breaking the pharmaceutical industry's stranglehold on attempts to control prescription drug prices. But as health care policy enters what is widely seen as an inflection point, Harvard Kennedy School professors Amitabh Chandra and Soroush Saghafian say even well-intentioned quick-fix policy changes may end up doing more harm than good. Instead, policymakers need to pursue change with care, by deeply analyzing the weaknesses COVID exposed and using that data to design intelligent policy that can create truly transformational change. Professor Chandra is the director of Health Policy Research at the Kennedy School, and his research focuses on innovation and pricing in the biopharmaceutical industry and value and racial disparities in health care delivery. Professor Saghafian is the founder of the Public Impact Analytics Science Lab at Harvard, and his work combines big data analytics, health policy, and decision science to discover new insights and provide new solutions to various existing problems. They're here to talk through this important historic moment in health care policy, both in terms of challenges and opportunities.

Ralph Ranalli: Soroush, Amitabh, welcome to PolicyCast.

Soroush Saghafian: Thank you. Thanks for having us.

Amitabh Chandra: Thrilled to be here.

Ralph Ranalli: We've had a couple of major events over the last couple of years that have affected health care. The first that I'm thinking about was the COVID-19 pandemic, obviously a huge one, which in a number of ways stretched the U.S. health care system and health care systems across the world to their breaking points and sometimes beyond. We've recently had the passage in the United States also of the Inflation Reduction Act, which included some seemingly significant affordability provisions for both prescription drugs and health coverage. It seems to me that we're at an interesting inflection point for health care, especially in the United States. Would you agree with that? How would you characterize, just to start, the current moment for health care policy, both in terms of challenges and in terms of opportunities? Maybe, Amitabh, I'll let you tackle that one first.

Amitabh Chandra: That's a great question. I think the key takeaway for me is that the COVID pandemic certainly accelerated a variety of trends that we had already been seeing. We had already started to use telehealth, and we have really accelerated the use of something like telehealth. We had already started to bet on mRNA vaccines before COVID, but COVID might allow us to achieve their full potential.

I feel very differently about the inflation reduction. I really don't think that it is nearly of the same magnitude as the COVID epidemic. It's actually tiny. I mean, it is called the Inflation Reduction Act because we're in a period of high inflation. It's called the Inflation Reduction Act of 2022, but it's really going to do nothing in 2022, or 2023, or 2024. If you look at how much the Inflation Reduction Act will really reduce our deficit, it's going to reduce our deficit by about $20 billion over the next five years. And that is tiny. That is tiny. Over 10 years, it'll reduce the deficit by about $300 billion, which sounds like a lot of money because it is to all of us. But in the context of the balance sheet of the United States government, we're talking about undetectable differences. There's a colossal amount of overselling of the Inflation Reduction Actwhich is different from saying it's a bad act. We should probably spend some time talking about all the good ideas in it. But, it's a tiny piece of legislation relative to the enormous dent that COVID-19 has put into the health of the economy and the health of all Americans.

Ralph Ranalli: Well, I think people are saying about the Inflation Reduction Act, if you look at the optimistic takes on it, that it may be a tiny step, but it's a tiny step in a space where we were getting no steps at all. And that particularly in the area of controlling prescription drug costs in the United States, it was seen as breaking the grip ever so slightly of big pharma in terms of drug pricing. Do you take any positives away from just the spirit of it, as opposed to the admittedly tiny magnitude of it?

Amitabh Chandra: It is true that it's the first time that Pharma, the Trade Association of Pharma, and Bio, the other trade association, have actually lost. In that sense, is there some significance to Congress passing legislation that hurts the revenues of the pharmaceutical industry, is that salient? Absolutely. But that's quite different than saying, "We have taken a big piece out of the industry." Let me just explain and put all of this into context. If you think about how much money we will save as a result of the act's provisions in the drug pricing part of the act, over 10 years we'll save, if you're optimistic, about $300 billion. That's about $30 billion a year. Just to be clear, those savings won't kick in for many years. But let's just be optimistic and say they'll kick in right away, which, just to be clear, is not how the act is written. So we're going to be saving about $30 billion a year. That sounds like a lot of money. But look, annual spending on drugs is about $450 billion a year. This is less than 10%. Then the question becomes, is this the right way to even do it? There are a variety of drugs where the government is appropriately worried about that high price. But what you see in the Inflation Reduction Act is an extremely arbitrary way of getting at solutions to reduce the price. The idea here, in the act, is: let's pick 10 medicines in 2026 and negotiate the price of those 10. Let me assure you that we're having problems with many more than 10 medicines. Then when we get up to 2029, when the full act takes place, we're talking about negotiating the price of 20 medicines. We're talking about 10 and 20 medicines. We know that there are thousands of medicines out there. Again, is that really the right way to go about doing it? Then, is this really even negotiation? Because as you know, the way the act works, if a manufacturer does not give the government the price that the government wants, then the government is able to put a colossal excise tax on the manufacturer. Is that really negotiation? The tax is colloquially described as being somewhere between 65% of total sales for drugs all the way up to 95% of sales. Is that really negotiation? That's something we really have to grapple with. I think we call this negotiation, but is it really negotiation if I walk up to you, or my friend Soroush, and say, "Soroush, I'm going to beat you up if you don't hand me your wallet"? Then Soroush hands me his wallet. Is that really negotiation? Other companies don't negotiate in this manner. Negotiation means you offer a price. And if you don't get the price, you don't get the drug. You don't get to then use the full taxing authority of the United States government to get the price that you want. If you start to do things like that, it will affect the pharmaceutical companies' incentives to be in the pharmaceutical business.

Soroush Saghafian: Maybe I can just add a little bit of perspective to this, Ralph. I think it will help just to step back a little bit and think about the health care sector for a moment. We are spending about $4 trillion. That's 20% of our GDP. If you think about it per capita, it's about 2.5 times the average of other OECD countries. At the same time, the latest rankings show that in terms of life expectancy at birth, let's say, we are 31st. You look at maternal mortality or infant mortality, you look at the obesity rate, you look at heart disease rates, you look at HIV, diabetes -- across all these outcomes, we are not doing well. Now, prices are obviously one part of the problem, but it's a very small part of the whole picture to me. There are three things in the act that I think are trying to get at the expenditure. One is that, well, obviously it will give more power to Medicare to negotiate Part D and Part B drugs. The other thing is that it will expand some eligibility for low-income Part D subsidies. For instance, it was 135%. Now they're saying, "We are going to move it to 150% of the federal poverty limit." The other part of the act that I think is effective is putting a cap on the spending. But all of this together, if successful over 10 years, we are thinking about... The latest estimate is about $287 billion. Or if you're optimistic, as Amitabh said, let's say $300 billion over 10 years. That is not much. To get to the main issues of health care, to me, I think we have to think more about how to be efficient and effective, not just reducing prices. Obviously, it's important to do that, but it doesn't get us to where we want to be in the big picture of health care.

Ralph Ranalli: I'm glad you said that because I did want to return to that big picture. Soroush, in February, you looked at pandemic-related hospital closures and the changes they're causing and what information came out of that that could be useful for policymakers. But you also said there were unique opportunities for researchers to conduct studies that can shed light on what you call different implications, trade-offs, and consequences of various strategies that can be followed. Going back to COVID, turning away, I guess, from the IRA for a minute, what do you see in the big picture as the opportunities created for research that informs policy coming out of the pandemic?

Soroush Saghafian: That's a very good question. First of all, I think COVID was a very serious stress test for hospitals. We learned whether the hospitals can be flexible enough to essentially handle demand shocks like COVID. Are they flexible enough to shift their resources from elective surgeries, for instance, to something that is more urgent? If you think about this as in supply chains, we think of them as disruption risks. We try to stress test supply chains. But in hospitals, we haven't done much of this, and so this was a serious test for us to see whether the health care systems, the providers are able to handle demand shocks.

The second thing was the government trying to support hospitals that were really in a bad situation. Because, as you may know, a lot of hospitals had to stop their elective surgeries and handle more of the COVID patients, for whom the profit margin is really low for hospitals. It put really heavy stress on hospitals, not just operational stress but also financial stress. The government then introduced the CARES Act -- the Coronavirus Aid, Relief, and Economic Security Act -- which gave $175 billion to health care providers dealing with the financial consequences of coronavirus and other things. Now, what is important here is that we have to learn how to use money like that, how to use our limited resources more efficiently. That is, how do we allocate this to hospitals and to different providers? The consequences of COVID for, let's say, rural hospitals are very different than for urban hospitals. A lot of these rural hospitals that ended up closing because of COVID, they are the hospitals that are essentially also the large employers in those rural communities. A lot of people also lost their jobs, lost their work, lost their only income. I think for me, COVID gave us the ability to think about all of this more proactively.

The last piece is about how we support the R&D activities of vaccine providers and of other organizations that are trying to, for instance, use AI methods and other methods to predict the next pandemic, and to allow us to alleviate it by either developing related vaccines in advance or by having alarm systems for when these things come up. I got a call in February from the government of Bahrain. They asked us to analyze some data because they thought there was going to be a pandemic. This was before the WHO announced the pandemic. There are signs, and there are technologies now, that we can use to be ready for the next pandemic. I think it's very important for us to put all these lessons together and try to make sure that the next pandemic is not going to change everybody's life as COVID did.

Amitabh Chandra: That's such a great point. Can I jump in?

Ralph Ranalli: Absolutely.

Amitabh Chandra: I, just listening to Soroush, realize that it is so easy to criticize various parts of the COVID response. We were definitely underprepared for it. That said, we know that these pandemics will happen again. Even before COVID, there was SARS. There was MERS. There was H1N1. There was Ebola. Pandemics have affected other parts of the world. They've not really shown up on our shores since the influenza pandemic over a hundred years ago, so we should be prepared for the next pandemic. One way to prepare for the next pandemic is to learn what we did well and what we did badly in this pandemic.

Soroush mentioned the $187 billion that the federal government allocated to providers, hospitals, and doctors. Well, that was an extraordinary piece of legislation because Congress was able to work together during the pandemic to pass the legislation. Now, what we failed on, though, was we failed in figuring out who should get the money. If you look at who should get the money, one of the things that we found was that the money went to extremely well-resourced hospitals, not the ones that Soroush is worried about, and not the ones that Congress is worried about, and not the minority-serving hospitals and doctors. The money disproportionately went to wealthy hospitals in Boston, and wealthy hospitals in Los Angeles. The lesson is let's compliment Congress for the CARES Act and the speed of the CARES Act. But the next time something like this happens, pandemic or catastrophe, let's be sure that the formulas we're using to determine the allocation are not as biased, not as blunt. It's not that we sent the money out even randomly this time. We sent the money to wealthy places. That was easily avoidable.

Soroush Saghafian: Absolutely.

Amitabh Chandra: I think the other point that Soroush is making, which is something that I don't see a lot of discussion around, is that the elective surgeries that got delayed, some of them never came back. Hospitals have closed in rural areas. Some physicians have retired. Many nurses have retired. What do we do about the fact there was a 70% decline in mammograms, a 70% decline in colonoscopies? The consequences of those delayed diagnoses will reverberate for years to come.

Ralph Ranalli: That's remarkable.

Amitabh Chandra: And we have to. This is the moment to think about how to bring all of those people into the system. Because the next time there's a pandemic, even if we allocate the money properly, there will be many preventable cases of non-communicable disease. I'm talking about heart attacks and cancer. I'm not talking about COVID. I think the issues that Soroush raises are incredibly complex, incredibly rich. Connecting COVID to the Inflation Reduction Act, it's not like the Inflation Reduction Act takes on these really big challenges.

Soroush Saghafian: Absolutely.

Ralph Ranalli: It seems to me that where the Inflation Reduction Act discussion that we've had so far and the COVID response criticisms that we've explored intersect is in a disconnect between good intentions and good policy. We want to put money somewhere. We have to solve a problem. But in the middle, we've got bad policy that's making things, in some cases, worse, but definitely not as good as they could be. How do we get to those good policies using what we've learned, and how do we get them adopted in a way that's helpful and meaningful?

Soroush Saghafian: I can answer that question based on some research that we did. Let's think about what Amitabh mentioned about hospitals closing. I agree with all the great points Amitabh is bringing up. Let's first think about what happens when a hospital closes. The surrounding hospitals now have to handle the patients that used to go to those hospitals. Now what do those hospitals do? There's a lot of discussion by policymakers that these hospitals that close are not efficient, so let them close so we will only have efficient hospitals and efficient providers remaining. We did this research to see whether that's correct or not. What do the surrounding, remaining hospitals do -- do they become more efficient or not? At some level, we find that they become more efficient. Because with their current resources, with their current number of beds, providers, et cetera, they are serving more patients. Per resource, essentially, they are serving more patients. From that perspective, you can argue that, for the efficiency of the system, it's good to let those hospitals close.

But it turns out that they are not essentially improving their efficiency by, for instance, increasing their bed utilization. What they do is just speed up the care. Speeding up the care can be a good thing or it can be a bad thing. If you're removing some of the extra steps that are redundant, extra tests that we know some providers do -- steps that are essentially not value-added -- that's a good thing. But what we find is that they're cutting some value-added steps for most of the patients. If you look at the impact on things like mortality or readmission rates, we are seeing that there's an increase. Now policymakers have to think about those things when they're allocating money to prevent hospitals from closing. They've been thinking about access to care, but that's only one dimension. As Amitabh mentioned, these policy questions are extremely complex. You have to think about various dimensions of that. One dimension is access, the one they think about: "If a rural hospital closes, people lose their access." But there are other things. There are quality implications. There are implications in terms of outcomes that we need to think about. Previously, we've had policies to help keep rural hospitals from closing and things like that. If we want to summarize them, none of them have been working, essentially. What that tells us is that we still don't know how to allocate money, how to allocate limited resources.

To me, if you want to make the health care system more efficient... Well, what does efficiency mean? It means that you are using your limited resources more intelligently. You have to learn how to allocate those limited resources intelligently. I think we need more research and more thinking, rather than the immediate thinking behind the allocations of money and other policies we've had: "We need the money. Let's pass this legislation. Let's allocate the money." Then what happens is what Amitabh is saying: the MGHs of the world, the Mayo Clinics of the world, they get the money. The hospitals that would remain and would be efficient, they don't get the money. To me, we need more thinking and more research in this area.

Ralph Ranalli: Amitabh, how do we get to that place where good policy research is informing these decisions in a much more robust way?

Amitabh Chandra: To improve health care in America, we need two things. We need aspiration, and we need evidence. I think we have a lot of the aspiration. What Soroush is saying is we don't have the evidence. Let's not lose the aspiration we have. I think embedded in the Inflation Reduction Act is a well-intentioned aspiration that we want to reduce the price of a variety of drugs. We don't use the best evidence to figure out how to do that. We let Congress figure these issues out at moments of incredible urgency. We're taking on prescription drug prices and calling it the Inflation Reduction Act because inflation is high right now. Prescription drug prices being high is probably an issue that has affected Americans for a long time. It doesn't have to do with this current spike in gasoline prices. Passing major pieces of legislation at the wrong time doesn't seem like a good recipe. But I think that a lot of the pieces to get the right legislation are already in place in the United States.

I'm a big believer in the work of the technocratic agencies, for example. I'm thinking here of agencies like the Congressional Budget Office, which scored the Inflation Reduction Act and had bad news for Congress. It said, "You think you're saving all this money. You think you're going to cure us of inflation. You're not. Because at the end of the day, you're going to save about $290 billion over 10 years. That's not going to do anything to the deficit. And most of the savings come many years from now." Take another technocratic agency, like the FTC. The FTC has been very worried about hospitals consolidating, in part because they're running out of cash, or say they are. One of the things we know from our colleague Leemore Dafny's work is that when hospitals merge, they often raise prices, and some might actually lower quality as well. The FTC has been acutely aware of that phenomenon, and that's a good thing, because it prevents hospitals from merging and raising prices. Another technocratic agency that really did extremely well during COVID was the FDA. I think the FDA's emergency approval pathways and the partnerships it forged with vaccine manufacturers are exactly the template, not only for the next pandemic but for the approval of all drugs. Often, FDA scientists know a lot more about the sorts of worries you might have about an untested medical technology than physicians or patients do. These three technocratic agencies certainly give us, or give me, an enormous sense of optimism. Relying on Congress's aspirations paired with these technocratic agencies is the right formula for taking on the very big challenges ahead of us.

Ralph Ranalli: Now, Soroush, in terms of finding evidence, you're affiliated with, among others, Harvard centers that study health policy but also decision science and data analytics. You teach big data and machine learning. How do those areas come together to provide useful insight into real world health care?

Soroush Saghafian: One of the big issues, as Amitabh pointed out, in improving the health care sector is using evidence. That's where I think about the use of data and technology... We have ignored that, and there are enormous opportunities when we think about the future of health care. For instance, we are now working on projects, and a lot of other researchers are doing this as well, that use AI technologies trained over datasets and put them on mobile apps. We are trying to prevent patients from going to the hospital because their phones can be their doctors. Unfortunately, our policies are designed for the traditional health care system, where the delivery of care is bounded by the physician and the patient being at the same location, whether that's the hospital or the physician's office. With the new technologies, the algorithms, the cell phones, wearables, et cetera, we don't have to have the patient and the physician or provider at the same location. The cell phone can now be the provider.

We need to use more evidence about how to train these algorithms correctly. Part of what we are doing in my lab is training these algorithms over large amounts of data so that they intervene correctly and prevent patients from needing to go to the hospital. As I mentioned, COVID was a stress test for hospitals. Their beds were full, and they couldn't handle the demand. Maybe if patients' cell phones had been their providers, we could have prevented them from going to the hospital.

Ralph Ranalli: Didn't COVID act, and I know this is probably the worst analogy, almost as an icebreaker...

Soroush Saghafian: Absolutely.

Ralph Ranalli: ... to retrain people that they didn't necessarily have to go physically to see their doctor, but could have a telehealth appointment and still feel comfortable that they were getting good health care?

Soroush Saghafian: Absolutely. Amitabh pointed to this: we were thinking about telehealth before COVID, but there were regulations that prevented it from coming into mass use. Policymakers and legislators then decided that we had to remove those barriers because we could not work with policies that restrict telehealth anymore. Again, going back to the point that a lot of these policies and regulations were designed for the traditional health care system: I think one of the silver linings of COVID was that it allowed us to rethink these things and to think about the future of the health care system in a more proactive way.

Amitabh Chandra: I think where that takes us, Soroush, is that if you think of a technology like telemedicine being a substitute for an in-person encounter... When I think about all the delayed cancer diagnoses that happened, I think, "Wow. Doesn't that mean that we as a nation should be investing much more in at-home testing for cancer?" I know there's a very active debate about a test called Cologuard, which allows you to do... It's a substitute for going to the physician's office for a colonoscopy. I think a lot of physicians would say it's not a substitute for an actual colonoscopy. Maybe it is. I'm not here to take a position on that. But isn't the point that that kind of innovation, if we made it better, would be a way to give people access in ways that reduce dependence on the office visit? The office visit may be better than the at-home test, but adherence with the office visit is going to be exactly zero if there's a pandemic, or if you are a poor person who can't get to the doctor's office.

There are many reasons to think that better medical innovation, not only in diagnostics but also in treatment, is where we want to be. The Inflation Reduction Act, for example, reduces the out-of-pocket cost of insulin to $35 a month. I think that's a wonderful thing. I think it's an absolutely wonderful thing. But it does make me ask three questions. First, why is it even $35 a month? I mean, who's overusing their insulin? Just to be clear: which insulin patient says, "Because I face the $35 a month, I use my insulin optimally"? It sort of highlights how terrible even the thinking around this well-intentioned idea was. Second, what about all the other medicines that patients need where the out-of-pocket cost is incredibly high? And third, why are we still treating diabetes with insulin? Why have we not figured out how to cure diabetes? Literally cure it, not just treat it. What would it take for our scientists at Harvard and around the world to use ideas from regenerative medicine to create better cells in the pancreas that actually secrete insulin, so that a diabetic could go in for a procedure and, in some ideal world or some future world, never have diabetes again? That is where we want to be, because it's only then that we have really reduced disease. To your point, Soroush, it's only then that we've reduced a lot of these disparities, a lot of these wedges that affect the less privileged far more than the more privileged.

Soroush Saghafian: Yeah. I wanted to also add another aspect of this: malpractice cases in hospitals and among other providers. If you want to improve the health care system, a large part of the costs comes from malpractice cases. It hurts patients. It hurts providers. Think about the number of lawsuits that physicians go through, and think about how many patients die because of malpractice. Well, guess what? We have AI now. We have large amounts of data. We have the ability to train all these algorithms... We have IBM Watson, which was sold to a private equity company recently. We have the ability to use all this technology to reduce malpractice but also, to Amitabh's point, to improve drug development and find cures. A lot of these companies are realizing that AI is enabling us to speed up drug development and find new cures for diabetes and other diseases. From a policy perspective, I think it's important for policymakers to think about those things. When we talk about efficiency being the intelligent allocation of resources, why are we putting all of our effort into reducing prices and not thinking about these other important aspects?

Ralph Ranalli: I wanted to wrap up by just asking you both: with all this learning that we've done over the pandemic and all the tools that are available to us now, what is your hope, your best-case scenario, for what we can achieve in terms of making a health care system that works better than the one we have now? Soroush, maybe you want to take this first.

Soroush Saghafian: Yeah, sure. I think part of it for me is being more optimistic about using technology. We now have projects that focus on using AI for predicting new pandemics and things like that. We have to think more carefully about how technology can help the health care sector, and how the private companies developing all these technologies can help on the policy side and integrate with it. The technologies that Amitabh mentioned, for instance, allow us to find better solutions for diabetes and to move patients out of hospitals. Many researchers are now working on home health, which means the hospital comes to your home; you don't need to go to the hospital, which is the traditional way. So for me, one part is moving from traditional systems to more innovative ways of delivering care.

The second part for me is about using data to improve the quality of care. We haven't done much about that. For instance, the Trump administration started requiring price transparency: all hospitals and providers have to disclose their prices. But there's nothing on the quality side yet. We've had a lot of papers published on public reporting of hospital outcomes and quality of care, but we haven't done much about it. How do we improve the quality delivered to patients using technology, using evidence and data? To me, those can help. I'm also very optimistic about moving toward value-based payments. The traditional way of delivering care in the US has been volume-based: the more patients you serve, the more profit you generate, the stronger your incentives. The push to value-based delivery, to me, hasn't gone that far, and there are issues, obviously negative things, about it that need to be resolved. But at the end of the day, if we want an innovative system, we have to think about how we move from this old volume-based delivery to value-based delivery.

Ralph Ranalli: Amitabh, I'll let you have the last word.

Amitabh Chandra: I agree with everything Soroush said. If I can just add to what he said, I want us as a country to grapple with two challenges. The first is long-term care. We discussed it in a very short way when we were trying to pass the Affordable Care Act, but it got taken out. The reality is that we have millions of Americans who, as they age, suffer from Alzheimer's, Parkinson's, other forms of dementia and other mobility problems. They don't need hospital care. They don't need to go to the doctor. But they need long-term care, and the United States does not offer a long-term care benefit; Medicare does not offer one. The United States has really not grappled with the colossal cost of that long-term care benefit. Because it's going to be expensive, we say we don't want to do it, but I think that's a very cruel solution. I think it also impedes exactly the innovation that Soroush wants us to engage with.

The second thing that I would encourage is more innovation, not just for the diseases where we have innovation but for the diseases where we have none. Go back to Alzheimer's, for example. We don't have a meaningful disease-altering drug for Alzheimer's, and one is still likely several years or decades away. One thing that I've learned from COVID is that relatively small amounts of money can actually induce massive amounts of innovation. That was my big takeaway from COVID. If you go back to Operation Warp Speed, the government said, "Here's about $13 billion that we will give to manufacturers who successfully develop COVID vaccines." We had something like five to 10 manufacturers jump into that race, and we got many successful vaccines out of it. That was people chasing a $10 to $13 billion prize. Now, what if we used that kind of thinking to announce similar prizes? I realize the government didn't call it a prize, but it was functionally a prize, because you were only able to tap into the money if you developed a successful vaccine. What if we said, "We're going to use that kind of thinking to create other vaccines for other diseases"? Maybe it's hepatitis B. Maybe it's river blindness. Maybe it's medicines that delay cognitive decline in dementia. It might not take a lot of money to create a transformational change in the amount of suffering that patients endure right now, but it will require us to spend more, not less. That's something we have to come to terms with: if the extra spending is worth it, then we should absolutely be doing it.

The consequences of COVID for, let's say, rural hospitals are very different than for urban hospitals. A lot of the rural hospitals that ended up closing because of COVID were also among the largest employers in their communities, so a lot of people lost their jobs, lost their work, lost their only income. I think, for me, COVID gave us the ability to think about all of this more proactively.

Ralph Ranalli: Well, let's hope some of these great policy ideas are actually able to make a difference. I want to thank you both for being here. It's been enjoyable and an education. I really appreciate it.

Soroush Saghafian: Thanks for having us. Thank you.

Amitabh Chandra: Thank you for a great conversation.

Ralph Ranalli (Outro): Thanks for listening. Please join us for our next episode, when we'll welcome Harvard Kennedy School Professor Daniel Schneider for a discussion about The Shift Project and his research on the ripple effects that precarious employment and unpredictable scheduling have on workers and the broader economy. If you have a suggestion for a future show or a question, please email us at policycast at H-K-S dot Harvard dot E-D-U. And until next time, remember to speak bravely, and listen generously.

Read the original:

Data analysis and intelligent policy design, not good intentions, will fix health care post COVID - Harvard Kennedy School


Campus Construction Updates and Ongoing Projects – The UCSD Guardian Online

With the commencement of Fall 2022 and UC San Diego's rapid expansion, students and staff returning to campus are faced with ever-changing campus topography. Currently, UCSD has 16 projects across its main and expanded campus. The following is The UCSD Guardian's guide to projects set to debut in the 2022-23 school year.

Epstein Family Amphitheater

Opening this fall, the 2,650-seat, open-air venue in Earl Warren College is set to host a variety of shows, ranging from theatrical dance to rock concerts, and to serve as a performance space for theater, dance, visual arts and music students. The amphitheater sits in close proximity to the Blue Line trolley, in hopes of weaving UCSD into the larger San Diego arts community.

The total project cost is reported to be $67.9 million, including a $10 million donation from Daniel and Phyllis Epstein. The Epstein family has previously contributed to the university by donating $25 million for Alzheimer's research, as well as donating to ArtPower at UC San Diego and the San Diego Symphony.

The amphitheater is set to host approximately 300 performances each year, debuting in October with performances by the San Diego Symphony and the artist Niki. Performances are planned to be offered to UCSD students for free or at an accessible cost.


Franklin Antonio Hall

The four-story building in the Jacobs School of Engineering opens in Fall 2022. The building is named after Qualcomm co-founder Franklin Antonio, who donated $30 million to the project's construction. Antonio's contribution was based on the premise that the building must prioritize student-faculty collaboration.

"I don't like the idea of professors being behind a locked door," Antonio told Triton Magazine. "One of my main requests for this building was that students be able to access professors' offices and have direct interaction. Undergraduates, especially, benefit tremendously from direct interaction with professors."

To fulfill the late Antonio's wishes, the building features 13 "collaboratories," open spaces aimed at facilitating connections and cooperation between professors and students. The $180 million building also houses a 250-seat auditorium and two 100-seat classrooms. An estimated 25% of the Jacobs School faculty will be situated in the hall.

Mandeville Art Gallery Renovations

Set to be completed this fall, the gallery renovations will include infrastructure updates as well as a new external entry. The new entry will feature an LED-lit canopy that showcases dynamic images to the public plaza.

Data Science Institute

By Winter 2023, the Data Science Institute (DSI), currently located in the San Diego Supercomputer Center building, will be relocated to the Literature Building in Warren College. The building had been used by the literature department since 1990, but with the completion of the North Torrey Pines Living and Learning Neighborhood (NTPLLN), the department has moved into office spaces there. The Literature Building is undergoing renovations that include a new facade, a new entryway and lobby, restroom renovations, the addition of gender-inclusive restrooms and a general revitalization of office spaces and internal infrastructure.

York Hall

The 26-year-old building in Roger Revelle College, currently covered by tapestries, is undergoing seismic improvements set to be completed by Winter 2023.

Pepper Canyon West Living and Learning Neighborhood

While it won't be completed for another two years, the Pepper Canyon West Living and Learning Neighborhood is one of the university's most widely anticipated projects. UCSD received $100 million in state funding for the project through Governor Gavin Newsom's revised state budget proposal. The neighborhood aims to ease housing demand in the La Jolla area by providing an additional 1,300 single-occupancy rooms for transfer and upper-division undergraduate students by Fall 2024. The project includes 22- and 23-story towers that will be connected to five-story buildings, surrounded by outdoor terrace seating and retail space, as well as two large courtyards with access to canyon trails.



Read the rest here:

Campus Construction Updates and Ongoing Projects - The UCSD Guardian Online


September: Blood clots and COVID-19 | News and features – University of Bristol

COVID-19 infection increases the risk of potentially life-threatening blood clots for at least 49 weeks, according to a new study of health records of 48 million unvaccinated adults from the first wave of the pandemic.

The findings suggest that the COVID-19 pandemic may have led to an additional 10,500 cases of heart attacks, strokes and other blood clot complications such as deep vein thrombosis in England and Wales in 2020 alone, although the excess risk to individuals remains small and reduces over time.

The research, involving a large team of researchers led by the Universities of Bristol, Cambridge and Edinburgh and by Swansea University, shows that people with only mild or moderate disease were also affected. The authors suggest that preventive strategies, such as giving high-risk patients medication to lower blood pressure, could help reduce cases of serious clots.

Researchers studied de-identified electronic health records across the whole population of England and Wales from January to December 2020 to compare the risk of blood clots after COVID-19 with the risk at other times. Data were accessed securely and safely via the NHS Digital Trusted Research Environment for England, and the SAIL Databank for Wales.

In the first week after a COVID-19 diagnosis, people were 21 times more likely to have a heart attack or stroke, conditions which are mainly caused by blood clots blocking arteries. This dropped to 3.9 times more likely after four weeks.

The researchers also studied conditions caused by blood clots in the veins, including deep vein thrombosis and pulmonary embolism, a clot in the lungs that can be fatal. The risk of blood clots in the veins was 33 times greater in the first week after a COVID-19 diagnosis. This dropped to an eight times higher risk after four weeks.

The higher risk of blood clots after COVID-19 remained for the study duration, although by 26 to 49 weeks it had dropped to 1.3 times more likely for clots in the arteries and 1.8 times more likely for clots in the veins.

Most previous research studied the impact of COVID-19 on blood clotting in people hospitalised with COVID-19. The new study shows that there was also an effect on people whose COVID-19 did not lead to hospitalisation, although their excess risk was not as great as for those who had severe disease and were hospitalised.

The authors say that the risk of blood clots to individuals remains low. In people at the highest risk, men over the age of 80, an extra two men in 100 infected may have a stroke or heart attack after COVID-19 infection.
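
To make the relative-versus-absolute risk arithmetic concrete, here is a minimal sketch in Python; the baseline weekly event rate below is an illustrative assumption, not a figure from the study.

```python
# Illustrative sketch: converting a relative risk into an absolute excess,
# showing why a large relative risk can still mean a small individual risk.
# The baseline weekly event rate is a hypothetical assumption, not a figure
# from the Bristol study.

def excess_per_100(baseline_rate: float, relative_risk: float) -> float:
    """Extra events per 100 people, given a baseline rate and a relative risk."""
    return 100 * baseline_rate * (relative_risk - 1)

BASELINE_WEEKLY_RATE = 0.001  # assumed 0.1% weekly chance of an arterial event

for period, rr in [("week 1", 21.0), ("week 4", 3.9), ("weeks 26-49", 1.3)]:
    extra = excess_per_100(BASELINE_WEEKLY_RATE, rr)
    print(f"{period}: ~{extra:.2f} extra events per 100 people")
```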

The data analysed was collected in 2020, before the mass vaccination rollout in the UK, and before more recent COVID-19 variants such as Delta and Omicron were widespread. The researchers are now studying data beyond 2020 to understand the effect of vaccination and the impact of newer variants.

The research is published in the journal Circulation and was supported by the BHF Data Science Centre at Health Data Research UK, the Longitudinal Health and Wellbeing COVID-19 National Core Study, Data and Connectivity National Core Study and the CONVALESCENCE study of long COVID.

Jonathan Sterne, Professor of Medical Statistics and Epidemiology at the University of Bristol, Director of the NIHR Bristol Biomedical Research Centre and Director of Health Data Research UK South West, who co-led the study, said: "We are reassured that the risk drops quite quickly, particularly for heart attacks and strokes, but the finding that it remains elevated for some time highlights the longer-term effects of COVID-19 that we are only beginning to understand."

Angela Wood, Professor of Biostatistics at the University of Cambridge, Associate Director of the British Heart Foundation Data Science Centre, and study co-lead, said: "We have shown that even people who were not hospitalised faced a higher risk of blood clots in the first wave. While the risk to individuals remains small, the effect on the public's health could be substantial, and strategies to prevent vascular events will be important as we continue through the pandemic."

Dr William Whiteley, Clinical Epidemiologist and Neurologist at the University of Edinburgh, who co-led the study, said: "The effect that coronavirus infection has on the risk of conditions linked to blood clots is poorly studied, and evidence-based ways to prevent these conditions after infection will be key to reducing the pandemic's effects on patients."

Paper

"Association of COVID-19 with major arterial and venous thrombotic diseases: a population-wide cohort study of 48 million adults in England and Wales" by Jonathan A. C. Sterne et al., Circulation [open access].

About the BHF Data Science Centre

The British Heart Foundation Data Science Centre is a partnership between Health Data Research UK (HDR UK) and the British Heart Foundation (BHF). We work closely with patients, the public, NHS organisations, researchers and clinicians to promote the safe and ethical use of data for research into the causes, prevention and treatment of all diseases of the heart and circulation.

Our vision is to improve the public's cardiovascular health through the power of large-scale data and advanced analytics across the UK. Funded by the BHF and embedded within HDR UK, the Centre provides the leadership, co-ordination and engagement needed to deliver this vision, through building capability, capacity and infrastructure to drive excellence in data-enabled cardiovascular research.

To find out more about the BHF Data Science Centre, visit http://www.hdruk.ac.uk/help-with-your-data/bhf-data-science-centre/, email bhfdsc@hdruk.ac.uk or follow us on Twitter @BHFDataScience

About the National Institute for Health and Care Research Bristol Biomedical Research Centre (NIHR Bristol BRC)

The NIHR Bristol BRC's innovative biomedical research takes science from the laboratory bench or computer and develops it into new drugs, treatments or health advice. Its world-leading scientists work on many aspects of health, from the role played by individual genes and proteins to analysing large collections of data on hundreds of thousands of people. Bristol BRC is unique among the NIHR's 20 BRCs across England, thanks to its expertise in ground-breaking population health research.


About SAIL Databank

SAIL Databank is a rich and trusted population data repository based within Swansea University Medical School. It contains billions of person-based records relating to health and administrative data, some dating back more than a quarter of a century. SAIL Databank is accredited to the highest international standards for an information management system (ISO 27001). It exists to make discoveries and develop policies that improve lives by providing approved researchers with secure, linkable and anonymised data that can be accessed and analysed from anywhere in the world.

More information available at http://www.SAILDatabank.com

Read the original here:

September: Blood clots and COVID-19 | News and features - University of Bristol


Sex, Selection and Biodiversity – Syracuse.edu – Syracuse University

Scientists generally agree that evolutionary biology was born in 1859 with the publication of Charles Darwin's On the Origin of Species. The idea that species can mutate (i.e., change over time) was not new. Decades earlier, Darwin's grandfather, Erasmus, had proposed something similar, designing a ladder-like diagram to show how humans evolved from single-celled organisms. Darwin went a step further, suggesting that natural selection was the mechanism by which species adapted to their environments.

"But there's more to the story," admits Steve Dorus, associate professor of biology at Syracuse University. Darwin surmised that natural selection wasn't just about survival. "He argued that some of the most dramatic differences between species were reproductive traits like ornaments and armaments," says Dorus, referring to peacock tails and beetle horns, respectively. "These traits came about because they were subjected to a type of selection associated with reproductive competition."

Darwin called his new theory sexual selection, which he outlined in his 1871 tour de force, The Descent of Man, and Selection in Relation to Sex. Whereas Origin sidestepped human evolution, Descent tackled it head-on. The thought of males vying for access to females, who, in turn, desired the biggest, most attractive mates, brought evolution into sharp focus. "Natural selection and sexual selection explain how species have evolved over time," Dorus adds.

Analyzing the origins of biodiversity is at the heart of the Center for Reproductive Evolution (CRE) in the College of Arts and Sciences. Housed in the biology department, the CRE explores patterns and processes of sexual selection, including their underlying molecular mechanisms and genomic consequences.

The center was co-founded by Dorus, Weeden Professor Scott Pitnick and Professor Emeritus John Belote in 2016. A shared interest in the study of reproduction, "along with a recognition of the potential synergism of combining our research efforts," Pitnick says, persuaded everyone to join forces. The 2019 appointment of Assistant Professor Yasir Ahmed-Braimah has brought additional expertise in genomics and bioinformatics.

"Our philosophy is grounded in interdisciplinary science," says Dorus, who, like his colleagues, studies diverse biological systems, including flies, beetles, mammals, birds and fish. "The Center for Reproductive Evolution offers complementary approaches to fundamental questions about sexual and ecological selection, diversification and speciation, and evolutionary genetics and genomics."

The team's workhorse is the common fruit fly. Formally known as Drosophila, this small, ubiquitous creature is one of the oldest, most effective genetic model organisms. That they are easy and inexpensive to culture in a lab environment is a boon to the CRE.

And thanks to new and emerging technologies (along with funding from agencies like the National Institutes of Health and the National Science Foundation), the CRE is helping rewrite the rules of biological research. "The center is collecting, storing, analyzing and disseminating information like never before," Dorus says. "What was once impossible is now commonplace."

The CRE is part of the University's Big Data and Data Analytics research group. Established in 2018, the group develops and applies data analysis methodologies to various fields, including genomics, the study of an organism's genes. "Working at the nexus of evolutionary biology, genomics and computer science means dealing with copious amounts of data," says Dorus, who helped found the group with Pitnick and several others, including Professor Chilukuri Mohan, an artificial intelligence expert in the College of Engineering and Computer Science.

In addition to resolving behavioral, morphological and physiological mechanisms of reproduction, the CRE excels at genetic mapping and characterization: determining the location and function of genes that confer specific phenotypes. Such research explains why individuals of a species often have similar, but rarely identical, characteristics. (Think eye color, skin tone or face shape in humans.) Genetic mapping also provides insights into complex evolutionary processes stretching back millions of years.

Ahmed-Braimah is part of a new wave of Syracuse scientists fluent in omics-based technologies and advanced algorithms. (Omics refers to subdisciplines like genomics, transcriptomics, proteomics and metabolomics.) "Technology is rewriting the rules of biological research," says Ahmed-Braimah, the Big Data group's first biology hire. "Whereas we used to have lots of theory and little data, we're now inundated by data."

To appreciate the science of the CRE is to understand the complex relationship between sperm and the female reproductive tract (FRT). Only since the 1950s have scientists confirmed that the FRT plays a key role in sperm maturation, a process in which sperm cells become competent to fertilize eggs. Sometimes sperm are not compatible with the FRT where they reside, leading to what is known as idiopathic infertility. "It's a major human health burden," says Dorus, adding that the condition affects about 30% of infertile couples worldwide.


Caitlin McDonough-Goldstein G'20, a postdoctoral researcher at the University of Vienna, became interested in idiopathic infertility while a student at Syracuse. Under Dorus and Pitnick's supervision, she tested thousands of tissue samples from Drosophila FRTs. Analyzing the flies' gene expression and protein production helped McDonough-Goldstein understand the FRT's molecular nature. It also made her realize how changes after mating can regulate reproductive events and ensure fertility.

McDonough-Goldstein's work serves as a blueprint for other studies of ejaculate-by-female interactions. For instance, it has informed those by former CRE postdoctoral researcher Erin McCullough, now an assistant professor of biology at Clark University in Massachusetts, and former Syracuse Ph.D. student Emma Whittington G'19, a postdoctoral research fellow at the University of Oslo's Natural History Museum.

Whittington, in fact, discovered that female-derived proteins contribute to sperm composition in the FRT. Although the precise ramifications of her findings are still being evaluated, they suggest that both males and females contribute to sperm production. "The development of sperm transcends the male and female reproductive tracts, requiring sophisticated molecular continuity and cooperation between the sexes," says Dorus, adding that Whittington's findings were the subject of a recent cover story in Proceedings of the National Academy of Sciences.

Zeeshan Syed, a fourth-year CRE postdoctoral researcher, revels in laboratory and computational biological research. Witness his involvement with the CRE's Drosophila Evolutionary Phenomics (DEP) project, which considers the evolution of biodiversity on an unrivaled scale.

Syed is part of a team of researchers quantifying about 25 complex traits in 150 different species of fruit flies. The traits include body dimensions, sex-specific lifespan, patterns of reproductive aging, sperm and egg morphology, and courtship and remating behavior, to name a few. "It's work that's 50 million years in the making," he says.

Involving colleagues from Cornell and Stanford universities, the DEP project aims to sequence the full genomes of all 150 species. "It's hard to imagine a bigger Big Data project than this one," says Ahmed-Braimah, adding that such initiatives are a dream for scientists of his ilk.

Still, the DEP project is an exercise in logistics, what with maintaining live cultures of many different species and running myriad experiments to measure their diverse traits. One of Syed's jobs is to organize the activities of a small fleet of undergraduates. (Some 30 biology majors have logged more than a thousand hours on the project over the past four years.) He's also responsible for providing individualized training in fluorescence microscopy and morphometric analysis, the latter of which is used to measure the length of fly sperm.

"If you want to conduct big data science, you need to be prepared to lead a diverse team of researchers," Syed says. "Working with professors Pitnick, Ahmed-Braimah, Dorus and Belote on the DEP project has been a once-in-a-lifetime opportunity, turning me into a well-rounded, highly integrative biologist."

Pitnick, for one, thrives on working with young researchers. "Our undergraduates are curious, insightful and creative," he says. "Many of them improve our research in meaningful ways, and nearly all of them co-author multiple publications."

Case in point: Pitnick protégé Amaar Asif '22 was among a handful of undergraduates who co-authored a major paper for the peer-reviewed journal Cells. The lead author was Pitnick, who, while measuring fly sperm, uncovered a novel developmental mechanism enabling flies with small testes to produce unusually long sperm. For Asif, the chance to contribute to such a discovery was transformative.


"There's so much we don't know about this mechanism, and there are very few science papers to reference it," says Asif, who earned bachelor's degrees in biology and neuroscience. "It's uncharted territory."

An ongoing priority for the CRE is to understand the evolutionary link between sperm and FRT length. Pitnick laid the foundation in the 1990s, when he found that sperm in some species of Drosophila can grow up to two and a half inches in length, 20 times longer than the fly itself and a thousand times longer than average human sperm. Pitnick also realized that as sperm became larger and fewer in number, the females got fewer of them per copulation. "As few as a couple dozen sperm per mating, in some cases," he points out. This caused the flies to mate more often.
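
For scale, here is a quick back-of-the-envelope check of those ratios; the fly body length and average human sperm length used below are rough literature values (assumptions), not figures from this article.

```python
# Back-of-the-envelope check of the length ratios quoted above. The fly body
# length and average human sperm length are rough assumed values, not
# measurements from the CRE.

INCH_TO_MM = 25.4

giant_sperm_mm = 2.5 * INCH_TO_MM  # "up to two and a half inches" (~63.5 mm)
fly_body_mm = 3.0                  # assumed typical Drosophila body length
human_sperm_mm = 0.06              # assumed human sperm length (~60 microns)

print(f"giant sperm: {giant_sperm_mm:.1f} mm")
print(f"vs. fly body: ~{giant_sperm_mm / fly_body_mm:.0f}x longer")        # ~21x ("20 times")
print(f"vs. human sperm: ~{giant_sperm_mm / human_sperm_mm:.0f}x longer")  # ~1058x ("a thousand times")
```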

The takeaway here is that big, high-investment sperm have a better chance of penetrating the limited storage space of the FRT. "Making giant sperm isn't easy; it takes a lot of energy," Pitnick continues. "Our research demonstrates that female choice and male competition require considerable investment on both sides." Thus, the CRE strives to figure out how genetic and molecular mechanisms work together in an evolutionary sense.


Of course, tasks that used to take years to complete, like assembling an organism's genome, can now be accomplished in days or weeks. And with breakthrough studies of cellular and molecular mechanisms, scientists like Ahmed-Braimah can interrogate trait evolution with unrivaled speed and clarity. His current research into the changes that female Drosophila undergo after mating (changes that influence feeding behavior, metabolism, immune function and egg production) relies on a slew of materials and methods.

"Because functional genomics research provides a vast readout of cellular and molecular processes, we can access an immense amount of information. This helps us develop testable hypotheses more quickly," says Ahmed-Braimah, a computational and evolutionary genomicist.

Pitnick agrees, lauding the incredible variation that stems from natural and sexual selection, in terms of the traits themselves, their underlying genetics and their interactions. "This variation helps us make sense of biodiversity and our place in it, not to mention the problems facing humanity, like disease and climate change," he concludes. "Perhaps part of the solution is found in a fly buzzing around your overripened fruit."

This story was published on September 20, 2022.

Read this article:

Sex, Selection and Biodiversity - Syracuse.edu - Syracuse University
