Machine Learning and MRI Scans: A Glimpse into the Future of Psychosis Prediction – Medriva

Psychosis, a severe mental disorder characterised by a disconnection from reality, has until recently been notoriously difficult to predict. However, recent advances in machine learning and medical imaging have paved the way for a tool capable of predicting the onset of psychosis with striking accuracy. The tool uses machine learning algorithms to analyse MRI brain scans, classifying individuals as healthy or at risk of experiencing a psychotic episode. The research, published in the journal Molecular Psychiatry, demonstrated 85% accuracy in differentiating individuals at risk from those not at risk, and 73% accuracy when presented with new data.
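
For intuition about why accuracy measured on new data is lower than on the data a model was developed with, here is a minimal, purely illustrative scikit-learn sketch; the synthetic features stand in for MRI-derived measures, and nothing here reflects the study's actual pipeline:

```python
# Illustrative only: training vs. held-out accuracy for a binary classifier.
# Synthetic features stand in for real MRI-derived measurements.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n, d = 400, 100                       # hypothetical sample and feature counts
X = rng.normal(size=(n, d))           # stand-in for per-region brain measures
w = rng.normal(size=d)
y = (X @ w + rng.normal(scale=5.0, size=n) > 0).astype(int)  # "at risk" label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Accuracy on data the model has seen is typically higher than on unseen data,
# the same pattern as the 85% vs. 73% figures reported above.
print("training accuracy:", accuracy_score(y_tr, clf.predict(X_tr)))
print("held-out accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```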

The tool's predictive power holds immense potential for early intervention, which has been proven to significantly improve outcomes for those at risk of psychosis. By identifying individuals at high risk before the onset of psychosis, particularly during critical periods such as adolescence and early adulthood, the tool can facilitate timely and targeted interventions. The positive impact on an individual's mental health can be enormous, preventing the full-blown manifestation of psychosis and reducing the potential for long-term psychiatric issues.

In a related study shared on ScienceDirect, a new radiotracer, 18F VAT, was used in a positron emission tomography (PET) study to measure the vesicular acetylcholine transporter (VAChT) in patients with schizophrenia. The study found a positive correlation between psychosis symptom severity and VAChT in multiple regions of interest. This underscores the potential of VAChT as a target for detecting and characterising clinical pathology and further illuminates the complex relationships between various neurotransmitters in the brain.

Further research published in Nature has investigated the relationship between trauma-related intrusive memories (TR IMs) and the anterior and posterior hippocampi morphology in PTSD. The findings suggested that a higher frequency of TR IMs in individuals with PTSD is associated with lower structural covariance between the anterior hippocampus and other brain regions involved in autobiographical memory. This sheds further light on the neural correlates underlying this core symptom of PTSD.

Another research paper, shared on MDPI, delved into the predictive capabilities of blood-based biomarkers to quantify traumatic brain injury (TBI). Notably, the paper stressed the importance of understanding the protein biomarker structure and other physical properties, as well as the kinetics of biomarkers in TBI and related conditions like PTSD and chronic traumatic encephalopathy (CTE). Given the potential of biomarkers for diagnosis and discovery of new biomarkers, such research is essential in advancing our understanding of TBI and related conditions.

The development of a machine-learning tool capable of predicting psychosis onset from MRI scans represents a significant leap forward in mental health research and care. As the research team continues to refine the classifier for use in routine clinical settings, the hope is that this tool will become a standard part of psychosis prevention strategies, enabling healthcare providers to intervene early and provide effective, targeted care to those at risk. The integration of machine learning tools in medical imaging analysis is undoubtedly a promising development in this challenging field.

More here:
Machine Learning and MRI Scans: A Glimpse into the Future of Psychosis Prediction - Medriva

Read More..

Machine learning means we can read a 2000-year-old carbonized scroll buried by Mount Vesuvius and turns out it’s all … – PC Gamer

You hear that? That's the sound of people everywhere saying 'they can do that?' after reading how researchers used imaging, computer vision, and machine learning to read ancient Roman text inside what, to look at it, most would take for little more than a stick of charcoal.

The 2000-year-old scroll is one of more than 800 discovered in the ruins of Herculaneum, a Roman city buried by the eruption of Mount Vesuvius in AD 79. Carbonised by the eruption, each scroll is incredibly fragile, and any attempt to open one up to see if it's in any way legible has ended in a crumbling mess. Yet researchers have been keen to unlock the secrets stored in these scrolls, and it looks like machine learning might have proven the key to doing so.

Firstly, the scroll read by researchers was unwrapped, virtually. This complex process involves scanning the scroll, which took place at a particle accelerator near Oxford, UK. Then, these crumpled layers were unfurled into what amounts to flat layers of papyrus (still virtually, of course).

"The X-ray photos are turned into a 3D volume of voxels using tomographic reconstruction algorithms, resulting in a stack of slice images," the Vesuvius Challenge webpage says.

Then a step called Ink Detection was carried out on the unfurled layers, and this uses a machine learning model to identify the inked regions of the papyrus.
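
For intuition, here is a toy sketch of patch-based ink detection: classify small patches of the flattened surface as ink or no-ink. The actual challenge models are deep networks over 3D subvolumes (including the TimeSformer-based model mentioned below), so treat this scikit-learn version with synthetic data purely as an illustration of the idea:

```python
# Toy ink detection: label small patches of a flattened papyrus layer as
# ink / no-ink. Data here is synthetic; real labels come from aligned fragments.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
patch = 8                                          # 8x8 patches from a slice image
n_patches = 5000
X = rng.normal(size=(n_patches, patch * patch))    # flattened patch intensities
y = rng.integers(0, 2, size=n_patches)             # hypothetical ink labels
X[y == 1] += 0.3                                   # pretend ink subtly shifts density

model = RandomForestClassifier(n_estimators=100, random_state=1)
model.fit(X[:4000], y[:4000])
print("held-out accuracy:", model.score(X[4000:], y[4000:]))
```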

Various teams of researchers have been trying to extract text from the scrolls as part of something called the Vesuvius Challenge. Each team applied its own methods, models, and improvements to previously developed techniques to try to uncover more of what these scrolls are hiding. Teams had until midnight on January 1, 2024 to submit their results, before a panel of "eminent papyrologists" reviewed each entry to verify the results. The winning team received $700,000.

The winning team, comprising Youssef Nader, Luke Farritor, and Julian Schillinger, entered a submission that was deemed by all the judges to be the most readable of the lot. Here's how they did it, according to a non-eminent non-papyrologist (me).

The team's approach built on the crackle pattern discovered by Casey Handmer last year, which was later built upon by Luke Farritor, who used a GTX 1070 as part of his earlier prize-winning efforts. Other researchers, including another member of the winning team, Youssef Nader, had also built excellent machine learning models for detecting ink using fragments that had fallen off the scrolls, though these only appeared to work well on fragments, not rolled-up papyrus.

The winning team used the combined knowledge of these approaches to produce the clearest results with the rolled-up scroll remains.

"The submission contains results from three different model architectures, each supporting the findings of the others, with the strongest images often coming from a TimeSformer-based model In addition to unparalleled ink detection, the winning submission contained the strongest auto-segmentation approach we have seen to date."

Around 5% of the first scroll has now been read as a result of all the work from all the teams of researchers working on this crumbly subject. And that's why the 2024 Vesuvius Challenge Grand Prize has now been announced, with the ambitious aim of going from the 5% now known up to 90% of all four scrolls that have been scanned.

So, there's definitely work to be done to understand more of it, but even today the researchers have some idea of what it says: "as too in the case of food, we do not right away believe things that are scarce to be absolutely more pleasant than those which are abundant."

Yep, it's all about the pleasures of life, or what's known as Epicureanism. Namely, for this section at least, the pleasures of stuffing your face.

See the article here:
Machine learning means we can read a 2000-year-old carbonized scroll buried by Mount Vesuvius and turns out it's all ... - PC Gamer

Read More..

Assosia and Simpson Associates Team Up to Unlock Product Matching Potential with Machine Learning – AiThority

Data Transformation specialists, Simpson Associates, announced a collaboration with Assosia, renowned global retail research and quality assurance specialists, to explore the power of Machine Learning (ML) for product matching in a new Proof of Concept (POC) project. This partnership aims to revolutionise the way Assosia manages and utilises its data, unlocking hidden insights and streamlining crucial tasks.

"Data is our most valuable asset," said Rob Davis, Director at Assosia. "By collaborating with Simpson Associates and their cutting-edge Machine Learning expertise, we are unlocking new possibilities for product matching within our organisation. This Proof of Concept has the potential to significantly improve our efficiency, accuracy, and overall data-driven decision-making."


The Assosia-Simpson Associates POC explores the potential of ML to revolutionise product matching. By creating, training, and fine-tuning multiple models on specific datasets, the project aims to unlock hidden connections and deliver tangible benefits, as the sketch below illustrates.
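
As an illustration of what ML-driven product matching can look like, here is a minimal sketch that matches product descriptions across two retailers' catalogues using character n-gram TF-IDF similarity. The product names are invented, and this is not the partners' actual (unpublished) approach:

```python
# Toy product matching: link descriptions of the same product across two
# catalogues via character n-gram TF-IDF and cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

catalogue_a = ["Heinz Baked Beans 415g", "Coca-Cola Zero 2L", "Walkers Ready Salted 25g"]
catalogue_b = ["Baked Beans Heinz 415 g", "Coke Zero Sugar 2 litre", "Walkers Crisps Ready Salted 25g"]

vec = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))
A = vec.fit_transform(catalogue_a)
B = vec.transform(catalogue_b)

sim = cosine_similarity(A, B)
for i, name in enumerate(catalogue_a):
    j = sim[i].argmax()                 # best candidate in the other catalogue
    print(f"{name!r} -> {catalogue_b[j]!r} (score {sim[i, j]:.2f})")
```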


"We are thrilled to partner with Assosia on this transformative project," said Nick Evans, Account Director at Simpson Associates. "Our combined expertise in data science and ML will empower Assosia to unlock the hidden potential within their data, paving the way for a future of optimised operations and data-driven success."



See the original post:
Assosia and Simpson Associates Team Up to Unlock Product Matching Potential with Machine Learning - AiThority

Read More..

End to End Machine Learning project implementation (Part 3) | by Abhinaba Banerjee | Feb, 2024 – DataDrivenInvestor

Photo by Emile Perron on Unsplash

Create a prediction pipeline using a Flask web app, then deploy the project to the AWS cloud using a CI/CD pipeline

Introduction

This part is a continuation of Part 2, where we covered Data Ingestion, Data Transformation, Model Training, Model Evaluation, and Model Hyperparameter Tuning. In this part, we will develop a Flask web application and deploy the project to the AWS cloud using a CI/CD pipeline.

We need to know some Flask basics before proceeding with building the application. Also, create a templates folder in the main project structure.

Create 2 files inside the templates folder: home.html and index.html.

These 2 files are responsible for the frontend design of the application.

Create the predict_pipeline.py inside the pipeline folder that is inside the src folder.
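
The file's contents aren't reproduced here, but a minimal sketch of such a pipeline might look like the following; the artifact paths, class names, and pickle format are assumptions modelled on the usual structure of this kind of project:

```python
# src/pipeline/predict_pipeline.py -- a minimal sketch; adapt paths and fields
# to whatever the training stage in Part 2 actually saved.
import pickle
import pandas as pd

class PredictPipeline:
    def __init__(self, model_path="artifacts/model.pkl",
                 preprocessor_path="artifacts/preprocessor.pkl"):
        # Load the model and preprocessor produced during training.
        with open(model_path, "rb") as f:
            self.model = pickle.load(f)
        with open(preprocessor_path, "rb") as f:
            self.preprocessor = pickle.load(f)

    def predict(self, features: pd.DataFrame):
        # Apply the same transformations used at training time, then predict.
        data = self.preprocessor.transform(features)
        return self.model.predict(data)

class CustomData:
    """Collects raw form inputs and converts them into a one-row DataFrame."""
    def __init__(self, **fields):
        self.fields = fields

    def get_data_as_dataframe(self) -> pd.DataFrame:
        return pd.DataFrame([self.fields])
```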

Let's look at what goes into app.py and predict_pipeline.py in a bit more detail.
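
Here is a minimal sketch of app.py consistent with the routes and templates named in this post (/predictdata, index.html, home.html); the form handling is an assumption about how the inputs reach the pipeline:

```python
# app.py -- a minimal sketch of the Flask application described in this post.
from flask import Flask, request, render_template
from src.pipeline.predict_pipeline import CustomData, PredictPipeline

application = Flask(__name__)
app = application   # Elastic Beanstalk expects a callable named "application"

@app.route("/")
def index():
    return render_template("index.html")

@app.route("/predictdata", methods=["GET", "POST"])
def predict_datapoint():
    if request.method == "GET":
        return render_template("home.html")
    # POST: read the form fields, build a one-row DataFrame, and predict.
    data = CustomData(**request.form.to_dict())
    results = PredictPipeline().predict(data.get_data_as_dataframe())
    return render_template("home.html", results=results[0])

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```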

Run app.py on the terminal (of course inside the conda environment)

Open a new browser window and enter http://127.0.0.1:5000/ in the address bar.

Now append /predictdata (the route defined in app.py) to http://127.0.0.1:5000/ to open the prediction form.

First, create the folder .ebextensions and create the file python.config inside it. This configures the project for deployment on Amazon Web Services (AWS) Elastic Beanstalk. Then create the Python file application.py, copy the contents of app.py into it, and commit to GitHub using the previous steps.

Better to use application.py instead of app.py: Elastic Beanstalk's Python platform looks for a WSGI callable named application by default, and a mismatched name causes deployment issues.

The contents of python.config
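
The config file itself isn't shown above; a typical .ebextensions/python.config for the Elastic Beanstalk Python platform looks something like this sketch (the exact WSGIPath value is an assumption and varies by platform version):

```yaml
# .ebextensions/python.config -- points Elastic Beanstalk's WSGI server at the
# Flask app. On newer Amazon Linux platforms the value is "module:callable";
# older platforms expect a file path such as application.py.
option_settings:
  "aws:elasticbeanstalk:container:python":
    WSGIPath: application:application
```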

We are using AWS Elastic Beanstalk (to deploy the application) with its Python platform.

For AWS deployment, we need an AWS account.

Go to the AWS search bar and search for Elastic Beanstalk.

We will use CodePipeline to integrate the GitHub repository we updated earlier with AWS Elastic Beanstalk.

Then search for CodePipeline in the AWS console; it provides continuous delivery.

Then click on Create pipeline

In the Add source stage, use GitHub version 1 as the source, so the pipeline integrates with your GitHub repository for this project.

Next, skip the add build stage

For the deploy stage, use AWS Elastic Beanstalk as the deploy provider; the other options are the application name and environment name.

The pipeline will ultimately integrate with the Elastic Beanstalk application we created, and the final deployment will be done.

The main objective of this blog was to walk through the steps of deploying the project with Flask and AWS using CI/CD pipelines. This is the first time I am using AWS to deploy an end-to-end Machine Learning or Deep Learning project.

Hope you like the end-to-end series of learning and showcasing. I will keep doing end-to-end deployments this year and hope to make valuable projects.

GitHub Repository of the project

This marks the end of the blog.

Stay tuned for the next part and follow me here and say hi.

Twitter

Read more here:
End to End Machine Learning project implementation (Part 3) | by Abhinaba Banerjee | Feb, 2024 - DataDrivenInvestor

Read More..

AI and Robotics Transforming The Long Beach Automatic Terminal – Medriva

The emergence of advanced technology in the form of artificial intelligence (AI), machine learning, and robotics in the transport industry is a significant development. One such innovation is taking place at the Long Beach Automatic Terminal in California, where autonomous, electric robots are being employed to transport containers. This approach aims to enhance efficiency and sustainability in container transport, demonstrating the potential of AI, machine learning, and robotics to transform the transportation industry.

As reported by the Cardinal Health ESG report, these robots are designed to move containers from ships to storage yards autonomously, eliminating the necessity for human intervention. This breakthrough aims not only to improve efficiency but also to reduce labor costs at the port. The use of these autonomous and electric robots aligns with the growing focus on tech for good, emphasizing the positive impact of technology on society and the environment.

With the advent of increased automation, there are potential implications for port efficiency, job quality, and local harbor communities. A report published by the UCLA Labor Center on automation and the future of dock work at the San Pedro Bay Port Complex, which includes the Ports of Los Angeles and Long Beach, documented these concerns. The report found that industry stakeholders questioned the immediate benefits of increased automation, identifying potential negative impacts and championing alternative solutions.

Key findings include the need for holistic solutions that address cargo flow across the entire supply chain while maintaining the port complex's stability as an economic, social, and cultural anchor for multiple generations of San Pedro Bay communities. This highlights the crucial balance between adopting advanced technology for improved efficiency and ensuring the well-being of the communities involved.

Undoubtedly, the adoption of AI, machine learning, and robotics in the transport industry is a technological leap. However, the imperative lies in striking a balance between leveraging technology for improved efficiency and ensuring sustainability. The use of autonomous and electric robots at the Long Beach Automatic Terminal signifies a step in this direction. As we move forward, the focus should be on tech for good, where technology is used to positively impact society and the environment.

See the original post:
AI and Robotics Transforming The Long Beach Automatic Terminal - Medriva

Read More..

Kay Firth-Butterfield On Harnessing AI’s Power Responsibly – TIME

Kay Firth-Butterfield has worked at the intersection of accountability and AI for over a decade and is excited about the future. "I'm not an AI pessimist. I believe that if we get it right, it can open so many beneficial doors," she says. But she's still cautious. After doctors diagnosed her with breast cancer last year, she was grateful they did not rely too heavily on AI, though it's increasingly used to evaluate mammograms and MRIs, and even in planning treatment. While Firth-Butterfield, who is now cured, worried less about whether a machine was reading her mammogram, she noted that over-reliance on current AI models can be problematic, as they sometimes present incorrect information. Her surgeons agreed, she says.

A former judge and professor, Firth-Butterfield has emerged as one of the world's leading experts on responsible AI, shaping efforts to ensure these systems remain accountable and transparent. Last April, she ended a five-and-a-half-year stint as the head of AI and Machine Learning at the World Economic Forum, where she crafted frameworks and playbooks for companies, countries and other organizations to steer responsible development and use of AI. Her work advising the U.K. and Brazil on creating such AI systems made its way into law. "If you're a government and you're using artificial intelligence with your citizens, then you have to be able to explain to your citizens how it is being used," she says. In 2016, Firth-Butterfield co-founded the Responsible AI Institute, which provides tools for organizations to build safe and reliable AI systems, and she serves on a council advising the U.S. Government Accountability Office on AI matters related to science and technology, and on an advisory board for UNESCO's International Research Centre on AI.

Nowadays, she also runs Good Tech Advisory, working with corporations, governments, NGOs, and media to implement AI responsibly. That means helping set up guidelines for the use of AI-reliant technology to minimize potential harm, while maximizing benefits and ensuring legal compliance.

As CEO of Good Tech Advisory, Firth-Butterfield has been helping hospitals in the U.S. navigate AI's potential uses, including for reading medical images and determining diagnoses. Many don't have clear guidelines about how staff can use programs like ChatGPT, even as Firth-Butterfield points out these tools can often provide inaccurate information. "Those companies are wrestling with some really serious responsible AI choices," she says. Doctors using AI to efficiently type notes and handle administrative tasks can allow more time for patient care. But relying on AI to come up with a diagnosis in high-pressure situations could be dangerous. And if a patient becomes sicker or dies, the question of who is liable becomes an issue.

When AI is not used responsibly, people can get hurt, and it's disproportionately women and people of color, Firth-Butterfield notes. Biased algorithms could prevent a worker from getting hired, unfairly reject mortgage applications, or make incorrect decisions about security threats based on facial recognition, for example.

At the core of Firth-Butterfield's advocacy is understanding how AI impacts the most vulnerable members of society. At the WEF, she worked with UNICEF to research the use of AI with children, and organized a Smart Toy Award that urged thoughtful implementation. "We are allowing our children to play with toys that are enabled by artificial intelligence but we have no understanding of what our children are learning, or where their data is going," she says.

Forbidding AI from being used in toys or classrooms as a way to protect children from its potential risks isn't the answer, says Firth-Butterfield. "We do need children to be using AI in education because they're going to be using it in their work. So we have to find a responsible way of allowing that interaction between machine and human," she says. But teachers need to stay in charge. "We can't just give education to AI; we need to keep humans in the loop," she says. Teachers might rely on AI for back-end administration, freeing up time to focus more on helping their students.

It's crucial to pay close attention to how the systems are constructed, but Firth-Butterfield is also concerned about who gets to participate. While more than 100 million people use ChatGPT, almost 3 billion people still lack access to the internet. "We are increasing the digital divide at a huge rate, not just between the Global North and the Global South but also within countries," she says. Although AI has the potential to revolutionize teaching in schools and the treatment of medical patients, much of the world may not feel its effects. "We tend to sit in our ivory towers talking about how AI is going to do everything brilliantly and we don't remember that much of the world hasn't been part of the internet revolution," she says.

Our future is at stake in these decisions about how people use and rely on AI, she says: "It's about whether we as humans build the society that we want."

This profile is published as a part of TIME's TIME100 Impact Awards initiative, which recognizes leaders from across the world who are driving change in their communities and industries. The next TIME100 Impact Awards ceremony will be held on Feb. 11 in Dubai.

View original post here:
Kay Firth-Butterfield On Harnessing AI's Power Responsibly - TIME

Read More..

The Role of Artificial Intelligence in Autonomous Vehicles – Medium

Photo by Erik Mclean on Unsplash

In the ever-evolving landscape of transportation, a revolution is unfolding on the roads: the era of autonomous vehicles. At the heart of this technological leap lies the driving force of Artificial Intelligence (AI), propelling cars from mere modes of transport to intelligent entities capable of navigating, deciding, and learning on their own. This article delves into the fascinating world where innovation meets the open road, exploring the indispensable role that AI plays in the development and deployment of autonomous vehicles.

The Genesis of Autonomous Vehicles:

The dream of self-driving cars has long captivated the human imagination, but it's in recent years that this dream has started to materialize. Autonomous vehicles, or self-driving cars, leverage a sophisticated combination of sensors, cameras, radar, and lidar technologies to perceive and interpret their surroundings. However, it is the integration of Artificial Intelligence that transforms these vehicles from mere machines into intelligent entities capable of making real-time decisions based on complex data.

Sensory Perception and AI Fusion:

At the heart of every autonomous vehicle is a network of sensors that act as its eyes and ears. Cameras capture visual data, radar detects objects, lidar creates 3D maps, and ultrasonic sensors measure distances. These data inputs are colossal and diverse, requiring the prowess of AI algorithms to make sense of them. Artificial Intelligence processes this vast array of data in real-time, identifying objects, pedestrians, and potential hazards, while also predicting and adapting to dynamic driving conditions.

Machine Learning and Autonomy:

The real magic happens when AI steps into the realm of machine learning. Autonomous vehicles are equipped with neural networks that learn and adapt from experience, improving their decision-making abilities over time. Machine learning allows these vehicles to analyze patterns, anticipate scenarios, and continuously refine their responses to different driving situations. The more data they process, the smarter and more adept they become, paving the way for a new era of adaptive, self-improving transportation.

Enhanced Safety and Efficiency:

One of the primary goals of autonomous vehicles is to enhance road safety. AI-driven systems can react faster than human reflexes, reducing the risk of accidents caused by human error, fatigue, or distractions. Furthermore, AI optimizes traffic flow, minimizes congestion, and enhances fuel efficiency through intelligent route planning and vehicle-to-vehicle communication. The result is not just safer roads but a more streamlined and environmentally conscious transportation system.

Challenges and Ethical Considerations:

While the promise of autonomous vehicles is undeniably exciting, the road to widespread adoption is not without its challenges. Ethical considerations, such as decision-making in morally ambiguous situations, and the need for robust cybersecurity to prevent hacking are critical areas that demand careful attention. Striking a balance between innovation and ensuring the safety and ethical use of AI in autonomous vehicles remains an ongoing challenge for the automotive industry.

The Road Ahead:

As AI continues to evolve, so does the potential of autonomous vehicles. The road ahead is paved with promises of safer, more efficient transportation systems, and AI will be at the forefront of this journey. From reducing traffic accidents to transforming the urban landscape, the integration of artificial intelligence in autonomous vehicles signifies a monumental leap towards a future where driving is not just a means of transport but an intelligent, adaptive experience that redefines our relationship with the open road.

Here is the original post:
The Role of Artificial Intelligence in Autonomous Vehicles - Medium

Read More..

Predicting Psychosis: Unlocking the Power of Machine Learning – ai2.news

Cutting-edge technology is revolutionizing the field of mental health, as a recent study unveils a groundbreaking machine learning tool that can predict the onset of psychosis. By analyzing MRI brain scans, this innovative classifier can effectively differentiate between individuals who are at risk of developing psychosis and those who are not.

The study, conducted by an international consortium of researchers including experts from the University of Tokyo, examined over 2,000 participants from various global locations. Among the participants, approximately half had been identified as clinically high-risk individuals for psychosis. The classifier demonstrated impressive accuracy, correctly distinguishing between those who would later experience overt psychotic symptoms and those who would not. During the training phase, it achieved an accuracy rate of 85%, which reduced slightly to 73% when exposed to new data. The findings have been published in the esteemed journal Molecular Psychiatry.

This groundbreaking tool could prove invaluable in clinical settings, enabling early intervention in individuals at risk of psychosis. While psychosis may encompass delusions, hallucinations, and disorganized thinking, its causes are multifaceted and varied. Factors such as illness, injury, trauma, substance abuse, medication, and genetic predisposition can all contribute to its development. By identifying those at risk, clinicians can provide timely and targeted interventions, significantly improving outcomes and minimizing the negative impact on individuals' lives.

Associate Professor Shinsuke Koike from the Graduate School of Arts and Sciences at the University of Tokyo emphasized the importance of this research. He highlighted that only about 30% of high-risk individuals eventually develop psychotic symptoms, leaving the remaining 70% uncertain of their fate. To better assist clinicians in their identification process, the integration of biological markers, alongside traditional symptom evaluations, becomes vital.

As the most common age for the first episode of psychosis occurs during adolescence or early adulthood, identifying young individuals in need of help can be particularly challenging. However, with the advent of this machine learning tool, healthcare professionals are empowered to proactively intervene and provide support to those most at risk. This marks a significant step forward in mental health research and treatment.

Source: Zhu et al./Molecular Psychiatry

An FAQ section:

Q: What is the groundbreaking machine learning tool mentioned in the article? A: The article discusses a machine learning tool that can predict the onset of psychosis by analyzing MRI brain scans.

Q: How accurate is the classifier in distinguishing between individuals at risk of developing psychosis and those who are not? A: During the training phase, the classifier achieved an accuracy rate of 85%. When tested with new data, it achieved a slightly reduced accuracy rate of 73%.

Q: What was the scope of the study conducted by the international consortium of researchers? A: The study examined over 2,000 participants from various global locations and focused on individuals who were clinically identified as high-risk for psychosis.

Q: How can this tool be valuable in clinical settings? A: The tool can enable early intervention in individuals at risk of psychosis, allowing clinicians to provide timely and targeted interventions to improve outcomes and minimize the negative impact on their lives.

Q: What are some factors that can contribute to the development of psychosis? A: Factors such as illness, injury, trauma, substance abuse, medication, and genetic predisposition can all contribute to the development of psychosis.

Q: What did Associate Professor Shinsuke Koike highlight about the research? A: Associate Professor Shinsuke Koike emphasized that only about 30% of high-risk individuals eventually develop psychotic symptoms, leaving the remaining 70% uncertain of their fate. He stressed the importance of integrating biological markers with traditional symptom evaluations to assist clinicians in the identification process.

Q: Why is identifying young individuals in need of help particularly challenging? A: The most common age for the first episode of psychosis occurs during adolescence or early adulthood, making it challenging to identify young individuals in need of help.

Definitions for key terms or jargon:

Psychosis: A mental health condition characterized by a loss of contact with reality, which may include delusions, hallucinations, and disorganized thinking.

Machine learning: A field of artificial intelligence where computers learn and improve from experience without being explicitly programmed.

MRI brain scans: Magnetic Resonance Imaging scans of the brain, which use magnetic fields and radio waves to produce detailed images of the brain's structure and function.

Classifier: In machine learning, a classifier is an algorithm that categorizes or assigns labels to input data based on patterns and features.

Molecular Psychiatry: A peer-reviewed scientific journal that publishes research in the field of psychiatry, focusing on the molecular and genetic aspects of psychiatric disorders.

Suggested related links:

University of Tokyo

National Institute of Mental Health: Schizophrenia

American Psychiatric Association

Excerpt from:
Predicting Psychosis: Unlocking the Power of Machine Learning - ai2.news

Read More..

Decoding new newsroom jobs in the age of AI – Media Makers Meet

Loathe it or like it, machines will be taking over some important jobs in the newsroom. For humans, the job landscape will change radically. It's time to get our robots in a row, reports Piet van Niekerk.

XX media company is looking for an AI Content Coordinator who will be responsible for developing and implementing content strategies that leverage artificial intelligence and machine learning technologies to create, curate, and optimise digital content. This is a brand new role that combines expertise in content creation, data analysis, and AI to enhance the overall content quality and user experience.

The above vacancy, advertised in the first week of February 2024, is one of many going up on job sites branded as new roles related to the use of AI. In this case, it is specific to media. The ideal candidate will have a Bachelor's degree in Marketing, Communications, Computer Science, Journalism, or a related field. Skills in Python and SQL (both programming languages) are preferred, as well as a strong interest in and understanding of AI and machine learning technologies and their applications in content creation and distribution.

Should you land this job, you won't be alone. Similar jobs related to managing Large Language Models (LLMs) at media companies are exploding online. Some are AI Tone-of-voice Editors, hired to streamline the process of refining written communication, making it more effective, coherent, and in line with the desired tone or brand identity. Others include Head of AI and Media, Editorial AI project coordinator, and an AI-assisted reporter post for an NCTJ-qualified journalist to expand AI technology across newsrooms in West Yorkshire, using AI technology to create national, local, and hyper-local content.

The New York Times recently hired an editorial director of artificial intelligence initiatives to establish protocols for the newsroom's use of AI and examine ways to integrate the technology into its journalism.

You get the picture. On the one hand, droves are being hired to manage AI, while others continue to debate how many jobs will be lost because of AI.

A white paper by the World Economic Forum (WEF), titled Jobs of Tomorrow: Large Language Models and Jobs and published towards the end of last year, is probably the most comprehensive analysis of this thorny subject. The report provides a structured analysis of the potential direct, near-term impacts of LLMs on jobs. It used as its base reference a study concluding that 62% of total work time involves language-based tasks, and found that the widespread adoption of LLMs, such as ChatGPT, could significantly impact that broad spectrum of job roles.

The paper is based on an analysis of over 19,000 individual tasks across 867 occupations, assessing the potential exposure of each task to LLM adoption and classifying them as tasks that:

have high potential for automation;

have high potential for augmentation;

have low potential for either; or

are unaffected (non-language tasks).

The paper also provides an overview of new roles that are emerging due to the adoption of LLMs.

The two industries with the highest estimates of total potential exposure to automation and augmentation measures are both segments of financial services.

These sectors are followed by information technology and digital communications, and then media, entertainment and sports. Specific to media and publishing, the paper predicts that 23% of functions can be automated, 33% augmented, 23% have low potential for automation or augmentation, and 21% consist of non-language tasks.

There is no attempt in the white paper to link these percentages to job losses. Instead, the paper concludes that as generative AI introduces a new paradigm of collaboration between humans and AI, it will redefine how work is done. This will reshape the nature of various job roles.

It is within this new paradigm of collaboration that there is room for job development in several key areas.

While the five key areas presented in the WEF paper are general to the wider job market, the current AI media jobs posted to job boards do not seem to relate to any specific development area. The most popular post advertised, Editorial AI project coordinator, seemingly covers all of the above. Even the folks at The Times are vague about the specifics of their newsroom's new Editorial Director of AI Initiatives, saying this individual will work with newsroom leadership to establish principles for "how we do and do not use generative AI".

Drawing on 30 years of newsroom experience, I have done my own analysis to pinpoint a few roles, pairing them with the key development areas highlighted by the WEF. Here's a starter for ten:

Potential roles:

AI features assistant. An experienced journalist who collaborates with AI systems to generate creative storytelling ideas, combining human creativity with machine-generated content.

AI content editor. An experienced editor who curates content generated by AI systems, ensuring accuracy, quality, target audience relevance, and adherence to editorial standards.

AI narrative strategist. A prompt engineer who creates a cohesive prompt strategy for generating engaging AI-driven narratives.

UX designer for AI. An experienced graphic designer assisting in the design of user interfaces to ensure AI-driven content on all the company platforms contributes to the user experience and enhances content consumption and interaction.

AI visual designer who combines graphic design expertise with AI capabilities to create visual and interactive storytelling.

AI interface designer who creates designs to assist platform users to seamlessly interact with AI-driven content.

AI-assisted reporter. A journalist whose task is to harness the power of language models to generate relevant news articles, reports and analyses, and interview transcripts, and create content variants for diverse distribution platforms. The roles main focus is to speed up efficiency and creativity.

AI language editor, or AI tone-of-voice editor: An AI subeditor who refines and enhances the language models used in content creation, ensuring they align with the newsroom's editorial standards and style.

AI innovations editor. An experienced editor who collaborates on developing AI systems to find new ways of storytelling and new ways to engage with content.

Data journalist. A journalist skilled in interpreting and analysing data generated by AI tools, providing insights and trends that inform editorial decisions and content creation.

AI tool trainer. An AI expert responsible for training and fine-tuning AI models specific to a publishers needs to ensure high performance and relevance to the target market.

AI content strategist. An AI editor responsible for using AI to develop content strategies, recommending topics, angles, or supplementary content based on audience preferences and trending themes. Ensures alignment with audience preferences and optimises AI for engagement.

AI ethics editor, who ensures responsible and ethical use of AI technologies in content creation and distribution.

AI compliance officer. An editor who monitors AI systems for compliance with ethical standards, addressing potential biases and concerns.

AI governance strategist. An AI strategies specialist on the editorial team who develops strategies and protocols for the ethical use of AI, aligning with worldwide industry standards.

If we adopt these roles and their subsequent skill sets become part and parcel of the modern newsroom, the logical question is: where will the fresh talent be found for these jobs?

David Caswell, an application architect for AI in news products and workflows, predicts that the most valued skill in an AI-empowered news organisation will likely be the same as in traditionally configured news organisations: "Editorial judgement: the ability to maintain a keen awareness of the deep informational needs of an audience or society, identify stories that meet those deep needs, verify and contextualise those stories, and then communicate them to audiences in clear and engaging forms will probably remain the foundation of journalism."

What was that about the more things change?

View original post here:
Decoding new newsroom jobs in the age of AI - Media Makers Meet

Read More..

Quantum computer outperformed by new traditional computing – Earth.com

Quantum computing has long been celebrated for its potential to surpass traditional computing in terms of speed and memory efficiency. This innovative technology promises to revolutionize our ability to predict physical phenomena that were once deemed impossible to forecast.

The essence of quantum computing lies in its use of quantum bits, or qubits, which, unlike the binary digits of classical computers, can exist in superpositions of 0 and 1.

This fundamental difference allows quantum computers to process and store information in a way that could vastly outpace their classical counterparts under certain conditions.

However, the journey of quantum computing is not without its challenges. Quantum systems are inherently delicate, often struggling with information loss, a hurdle classical systems do not face.

Additionally, converting quantum information into a classical format, a necessary step for practical applications, presents its own set of difficulties.

Contrary to initial expectations, classical computers have been shown to emulate quantum computing processes more efficiently than previously believed, thanks to innovative algorithmic strategies.

Recent research has demonstrated that with a clever approach, classical computing can not only match but exceed the performance of cutting-edge quantum machines.

The key to this breakthrough lies in an algorithm that selectively maintains quantum information, retaining just enough to accurately predict outcomes.

"This work underscores the myriad of possibilities for enhancing computation, integrating both classical and quantum methodologies," explains Dries Sels, an Assistant Professor in the Department of Physics at New York University and co-author of the study.

Sels emphasizes the difficulty of securing a quantum advantage given the susceptibility of quantum computers to errors.

"Moreover, our work highlights how difficult it is to achieve quantum advantage with an error-prone quantum computer," Sels emphasized.

The research team, including collaborators from the Simons Foundation, explored optimizing classical computing by focusing on tensor networks.

These networks, which effectively represent qubit interactions, have traditionally been challenging to manage.

Recent advancements, however, have facilitated the optimization of these networks using techniques adapted from statistical inference, thereby enhancing computational efficiency.

The analogy of compressing an image into a JPEG format, as noted by Joseph Tindall of the Flatiron Institute and project lead, offers a clear comparison.

Just as image compression reduces file size with minimal quality loss, selecting various structures for the tensor network enables different forms of computational compression, optimizing the way information is stored and processed.
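
The compression primitive behind that analogy can be shown in a few lines: keep only the largest singular values of a matrix, which is what tensor-network methods do when they truncate bonds. This sketch shows the generic idea, not the paper's specific algorithm:

```python
# Low-rank truncation, the basic "lossy compression" move in tensor networks:
# keep only the k largest singular values, like keeping dominant JPEG coefficients.
import numpy as np

rng = np.random.default_rng(0)
low_rank = rng.normal(size=(64, 8)) @ rng.normal(size=(8, 64))  # hidden structure
A = low_rank + 0.01 * rng.normal(size=(64, 64))                 # plus a little noise

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 8                                    # akin to a "bond dimension": how much to keep
A_k = (U[:, :k] * s[:k]) @ Vt[:k, :]     # best rank-k approximation of A

stored = k * (A.shape[0] + A.shape[1])   # numbers kept, vs. 64*64 originally
err = np.linalg.norm(A - A_k) / np.linalg.norm(A)
print(f"kept {stored} of {A.size} numbers, relative error {err:.4f}")
```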

Tindalls team is optimistic about the future, developing versatile tools for handling diverse tensor networks.

"Choosing different structures for the tensor network corresponds to choosing different forms of compression, like different formats for your image," says Tindall.

"We are successfully developing tools for working with a wide range of different tensor networks. This work reflects that, and we are confident that we will soon be raising the bar for quantum computing even further."

In summary, this brilliant work highlights the complexity of achieving quantum superiority and showcases the untapped potential of classical computing.

By reimagining classical algorithms, scientists are challenging the boundaries of computing and opening new pathways for technological advancement, blending the strengths of both classical and quantum approaches in the quest for computational excellence.

As discussed above, quantum computing represents a revolutionary leap in computational capabilities, harnessing the peculiar principles of quantum mechanics to process information in fundamentally new ways.

Unlike traditional computers, which use bits as the smallest unit of data, quantum computers use quantum bits or qubits. These qubits can exist in multiple states simultaneously, thanks to the quantum phenomena of superposition and entanglement.

At the heart of quantum computing lies the qubit. Unlike a classical bit, which can be either 0 or 1, a qubit can be in a state of 0, 1, or both 0 and 1 simultaneously.

This capability allows quantum computers to perform many calculations at once, providing the potential to solve certain types of problems much more efficiently than classical computers.

The power of quantum computing scales exponentially with the number of qubits, making the technology incredibly potent even with a relatively small number of qubits.
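
A quick way to see that exponential scaling: a classical description of n qubits requires 2^n complex amplitudes. The sketch below (illustrative only) builds an equal superposition over all basis states and prints how fast the state vector grows:

```python
# Classical cost of describing n qubits: the state vector has 2**n amplitudes.
import numpy as np

def n_qubit_plus_state(n: int) -> np.ndarray:
    # |+> = (|0> + |1>)/sqrt(2) on each qubit gives an equal superposition
    # over all 2**n basis states.
    plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
    state = plus
    for _ in range(n - 1):
        state = np.kron(state, plus)
    return state

for n in (1, 10, 20):
    state = n_qubit_plus_state(n)
    print(f"{n:2d} qubits -> {state.size:,} amplitudes "
          f"(each basis state has probability {abs(state[0])**2:.2e})")
```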

Quantum supremacy is a milestone in the field, referring to the point at which a quantum computer can perform a calculation that is practically impossible for a classical computer to execute within a reasonable timeframe.

Achieving quantum supremacy demonstrates the potential of quantum computers to tackle problems beyond the reach of classical computing, such as simulating quantum physical processes, optimizing large systems, and more.

The implications of quantum computing are vast and varied, touching upon numerous fields. In cryptography, quantum computers pose a threat to traditional encryption methods but also offer new quantum-resistant algorithms.

In drug discovery and material science, they can simulate molecular structures with high precision, accelerating the development of new medications and materials.

Furthermore, quantum computing holds the promise of optimizing complex systems, from logistics and supply chains to climate models, potentially leading to breakthroughs in how we address global challenges.

Despite the exciting potential, quantum computing faces significant technical hurdles, including error rates and qubit stability.

Researchers are actively exploring various approaches to quantum computing, such as superconducting qubits, trapped ions, and topological qubits, each with its own set of challenges and advantages.

As the field progresses, the collaboration between academia, industry, and governments continues to grow, driving innovation and overcoming obstacles.

The journey toward practical and widely accessible quantum computing is complex and uncertain, but the potential rewards make it one of the most thrilling areas of modern science and technology.

Quantum computing stands at the frontier of a new era in computing, promising to redefine what is computationally possible.

As researchers work to scale up quantum systems and solve the challenges ahead, the future of quantum computing shines with the possibility of solving some of humanity's most enduring problems.

The full study was published in PRX Quantum.


See more here:

Quantum computer outperformed by new traditional computing - Earth.com

Read More..