Can Artificial Intelligence Make the Travel Industry Sustainable? – Impakter

Sustainability is no longer a passing trend; it's a global imperative. The travel industry, in particular, is experiencing a surge in demand for sustainable options. According to the Booking.com 2023 Sustainable Travel Report, three-quarters of global consumers now seek more environmentally friendly travel choices, marking an 8 percent increase from the previous year.

Moreover, nearly half of travelers are willing to pay extra to reduce their carbon footprint while journeying. Despite these growing eco-friendly aspirations, many travelers struggle to find sustainable options, with just over half believing that such choices are limited, and 44 percent uncertain about where to locate them.

Today, hotels spend an estimated $8 billion annually on sustainability management, primarily due to cumbersome, fragmented, and predominantly manual processes. These antiquated data collection methods, which rely on tools such as email, Excel, and online surveys, pervade the sustainability data ecosystem. This extends from how hotels transmit their information to third-party green certification bodies and various sales channels to how they gather data from diverse sources like suppliers or various operators within hotel chains.

The result? Hotels are struggling to meet sustainability targets, and global carbon emissions from the hotel sector are projected to rise. According to the Sustainable Hospitality Alliance, hotels must reduce their carbon emissions by 66 percent per room by 2030 and by 90 percent by 2050 to avert further environmental damage. However, without reliable, easily interpretable data, stakeholders lack the means to make significant, positive changes. They cannot benchmark progress, set goals, or effectively communicate successes to customers. Manual data collection, processing, and analysis demand human resources that many hotels lack, and even if available, the staff might not possess the required training.

The business case for technology-backed sustainability management is compelling. Hotels with eco-credentials attract four times more guests compared to those lacking sustainability certifications. Taking into account guests' willingness to pay more for eco-friendly accommodations, it is estimated that poor sustainability management costs the industry $21 billion, including $13 billion in missed revenue.

What does an effective tech-supported sustainability management system look like? First and foremost, it should serve as a central hub for the collection of all sustainability data. This platform should be intuitive, scalable, and equipped with AI capabilities. It should cater to staff at the property level as well as higher-ups responsible for sustainability oversight across the organization.

Secondly, it should seamlessly integrate with third parties such as regulatory bodies and sales channels through plug-and-play APIs, facilitating the smooth transmission of sustainability data. Platforms like Booking.com, for example, enable consumers searching for hotels to filter accommodations based on their sustainability score, from level 1 to 3+. Specific actions taken by a hotel to earn its sustainability score, such as the elimination of single-use plastics and the installation of electric car charging stations, as well as any certifications earned (e.g., Green Key), should be readily accessible. A hotel that automatically communicates its sustainability data to sales channels, as opposed to sporadic data dumps, can leverage sustainability metrics as a key differentiator and showcase progress over time.
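To make the plug-and-play integration idea above concrete, here is a hedged sketch of the kind of JSON payload a hotel's platform might transmit to a sales channel on a schedule. The field names, the `HOTEL-0042` identifier, and the metric values are illustrative assumptions, not a real Booking.com schema.

```python
import json

# Hypothetical payload a hotel's sustainability platform might push to a
# sales channel's API automatically, instead of via sporadic manual data
# dumps. All field names and values are invented for illustration.
payload = {
    "property_id": "HOTEL-0042",
    "reporting_period": "2023-Q2",
    "sustainability_level": "3+",      # channel-facing score, levels 1 to 3+
    "certifications": ["Green Key"],   # third-party labels earned
    "actions": [
        "single_use_plastics_eliminated",
        "ev_charging_stations_installed",
    ],
    "metrics": {
        "energy_kwh_per_occupied_room": 21.4,
        "water_litres_per_guest_night": 174.0,
    },
}

body = json.dumps(payload, indent=2)
print(body)
```

A scheduled push of a payload like this is what lets a channel show both the current score and progress over time, rather than a stale snapshot.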

Taking it a step further, an ideal platform should automate sustainability data collection by integrating with tools like smart meters. This approach reduces the risk of human error and provides real-time, transparent insights into energy consumption and waste management. Armed with accurate, real-time data, an AI-enabled platform can offer automatic sustainability improvement suggestions across various parameters, enhancing operations, reducing water consumption, and meeting certification requirements. This, in turn, frees up valuable human resources for more strategic tasks.
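The automated pipeline described above can be sketched in a few lines: ingest smart-meter readings and flag consumption that jumps well above the recent baseline, so staff review exceptions rather than spreadsheets. The readings, the 24-hour window, and the 30% threshold are illustrative assumptions, not values from any real platform.

```python
from statistics import mean

def flag_anomalies(hourly_kwh, window=24, threshold=0.30):
    """Return indices of readings more than `threshold` above the
    trailing `window`-hour average."""
    flagged = []
    for i in range(window, len(hourly_kwh)):
        baseline = mean(hourly_kwh[i - window:i])
        if hourly_kwh[i] > baseline * (1 + threshold):
            flagged.append(i)
    return flagged

# 24 hours of normal load (~100 kWh), then a spike a platform would surface.
readings = [100.0] * 24 + [104.0, 98.0, 150.0]
print(flag_anomalies(readings))  # → [26], the 150 kWh reading stands out
```

A real system would feed flagged readings into the kind of AI-generated improvement suggestions the paragraph describes; this sketch only shows the exception-based data collection step.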

Recognizing that in-house development of such technology and expertise is often unsustainable, especially in light of labor shortages, many hotel brands are seeking partners with easy-to-implement solutions to streamline sustainability reporting and enhance efficiency. However, it's crucial for brands to choose partners with proven experience in sustainability management within the hospitality sector. A partner specializing in the hotel industry is uniquely positioned to understand the industry's nuances and challenges, enabling them to create tailored solutions that address pain points. This approach empowers brands to implement more ambitious, eco-friendly policies that save money, boost revenue, and enhance their competitive advantage.

In conclusion, sustainability management and Environmental, Social, and Governance (ESG) reporting have evolved beyond mere compliance requirements for companies. They have become essential drivers of business success. As both investors and consumers increasingly demand transparency and accountability from companies, sustainability has become a critical component of overall business strategy. By adopting a proactive stance, companies can leverage sustainability and ESG reporting to gain a competitive edge in the marketplace.

Editor's Note: The opinions expressed here by the authors are their own, not those of Impakter.com.

Exposed in the face of artificial intelligence – EL PAÍS USA

Whenever I go into doomsday mode (becoming a kind of technological Cassandra), I think about a mansplaining memo that reveals a truth that's unknown to me, a woman. It expresses itself with forceful words, reproaching me: "The truth is that it's not a knife that kills: a man kills," it reads. Who could defend themselves against these words? Who dares contradict the engineers of progress?

What this memo tends to forget is that, if the instrument in question didn't have a sharp end, and if it weren't available in the stores of any neighborhood, town or district, it wouldn't be suitable to kill anyone at any time. Before the authors of the memo sneeringly tell me that "we're not going to ban knives" or "erect gates around the countryside," I would like to bring up the following reflection (which probably won't interest you, nor change your mind).

The argument that "guns don't kill people, people kill people" is used repeatedly by the NRA (along with its extreme interpretation of the Second Amendment) to avoid the imposition of any type of gun control. In order to continue making money, this organization is capable of blaming the country's mental health problem (which, by the way, they're not willing to spend a penny on) rather than recognizing that the only function of a weapon is to injure or kill. A gun isn't useful when you're trying to cut a steak or open a box. It's only capable of causing thousands of deaths: 31,059 in the United States in 2023 so far, according to the Gun Violence Archive, a website that counts American firearm deaths in real time.

In Europe, since we do put gates up around the countryside (that's how paranoid we are), the possession and use of firearms is strongly limited. This is because we're aware, precisely, that they're instruments meant for killing. What's more, in Spain, according to the national laws that regulate weapons, an individual can only possess or carry knives that are less than four inches long and have just a single edge. Automatic and double-edged knives are prohibited; no citizen can possess knives, machetes, and other bladed weapons that are duly classified as weapons by the competent authorities.

Thanks to the cultural evolution of our laws, we've been able to prevent many people from dying, simply by limiting the availability of tools that have the capacity to kill. No one thinks of limiting the number of people capable of killing as a solution to the problem, because if that were to occur nobody would be left.

This same form of thinking should be applied to technology. There are both civilian and military uses and classifications for different types of tools, just as with pharmaceutical drugs: some can only be used in healthcare settings, under the prescription and control of a doctor, while others are over-the-counter. Meanwhile, certain aspects of medicine are subject to even stricter international prohibitions, such as the cloning of humans. When we're able to analyze risks, we're able to limit and manage them through regulations.

And then there's data, social media and internet technology: things that have been utilized by many people since birth. Generations have grown up and matured around technological instruments, not recognizing any danger in them. After all, who would have frowned upon the evolution of the personal computer or microchips, which allowed man to step on the Moon? Nobody, of course. Technology is neutral, cold, dispassionate and, therefore, beneficial. At least, that's what the tech giants would have you believe. That is, the same men who are asking tech writer Douglas Rushkoff how they can protect themselves from their own robots.

Many people have been enriched by making certain online tools available to eight-year-old boys, which teach them that sexual violence is a normal way of interacting with girls. They have created apps that allow 11-year-old kids to take pictures every 30 seconds and share them with billions of people. They've even created free babysitting services, in the form of the iPads that parents hand their toddlers.

The tech leaders are the ones to blame when, thanks to the democratization of AI, apps are used to turn innocent photos of young girls into child pornography via digitally-generated nudes. They have given consumers total access to technologies that should never have left highly-controlled environments, technologies that shouldn't be operated by simply anyone.

I could hide a snack in a nuclear briefcase and blame my dog for the extinction of humanity after he accidentally pushes a button while trying to get a hold of it. I could blame him, if I were a psychopathic billionaire, but since I'm merely a lawyer, what I'll do instead is not leave anything lethal within my dog's reach. I'll work with his basic impulses, instead of blaming him for them. This is a reminder to the tech people who put out that memo: guns kill, and AI shouldn't be accessible to teenagers raised on YouPorn.

Seeing is Believing: The Role of Artificial Intelligence in … – MD Magazine

The phrase "artificial intelligence" is everywhere in the public consciousness, both a buzzword promising a brighter tomorrow and a curse looming over our collective heads.

The public debate concerning the ethics and validity of artificial intelligence (AI) is likely to rage on for decades, but the evolving role of AI in ophthalmology suggests its vast potential to enhance clinical care and patient outcomes. AI imaging technologies may better track ophthalmic disease development, while AI chatbots could inform the next generation of medical leaders.

However, despite this excitement, ophthalmologists will face significant challenges. Given the rapidly expanding scope of artificial intelligence, the specialty will need to overcome ethical concerns and issues with interpretation to safeguard patient outcomes.

"We already know, just like any other technology, the first and second generations are not going to be widely used, and they're going to have to work out their kinks," said Jonathan Jonisch, MD, partner, Vitreoretinal Consultants of New York. "I think we're a way away from using machine learning to guide our treatments, but I think the value of artificial intelligence imaging is here already and will just continue taking steps to move forward."

The role of AI may take the form of an added tool in the growing armamentarium of clinicians, beginning with imaging technologies. A variety of deep and machine learning models have been deployed across the specialty, being worked and reworked to improve imaging and better detect ophthalmic disease and disease progression.

"When you have AI algorithms that have been trained to look at imaging, and perhaps using biomarkers that we may not see with the naked eye, there's a potential for AI to allow for decision support that may be even better than what can be done by humans," said Rajeev Muni, MD, MSc, a vitreoretinal surgeon in the department of ophthalmology at St. Michael's Hospital and Unity Health Toronto.1

Data from these studies have indicated the benefit of AI in tracking disease developments. An analysis of real-world data in China found an AI-based fundus screening system had the ability to detect 5 prevalent ocular conditions, with a particularly favorable efficacy for diabetic retinopathy, retinal vein occlusion (RVO), and pathological myopia.

Investigators described the potential benefits of the clinical application of AI, including its ease of use and limited need for resources, particularly for fundus screening, and its ability to collect epidemiological data.2

Multiple studies presented at the 83rd Scientific Sessions of the American Diabetes Association (ADA) focused on the translation of AI systems to detect diabetic eye diseases. This included the real-world deployment of an autonomous AI system at Johns Hopkins School of Medicine being linked to improved testing adherence for diabetic eye diseases across primary care clinics.

In particular, the deployment improved access and equity for those traditionally disadvantaged in medical care. Investigators suggested the use of AI to overcome historic disparities will not only benefit ophthalmology, but medicine as a whole.3

Another analysis found machine learning models allowed for the accurate and feasible identification of the progression of diabetic retinopathy. The AI model assigned the correct labels to approximately 91% of the ultra-widefield images, often indicating greater disease progression than human graders did. These algorithms, as a result, may further refine patient risk and introduce personalized screening intervals.4

According to Jonisch, machine learning may allow for the analysis of nuanced features on images and better prediction of the disease progression. With this knowledge, ophthalmologists could determine the most beneficial therapy for patients, as well as better determine the risk of failure or chance of success in a relevant clinical trial.

"Artificial intelligence and machine learning do a really good job of taking many more data points than we could analyze as a human at one time," Jonisch said. "I would envision a time where machine learning can help us predict disease progression better."

Racial bias in imaging, however, could be a residual concern for ophthalmologists. Race, although a social construct, is associated with phenotypic features that can affect image-based classification performance.

An AI system has the capability to be deployed at a greater scale than an individual clinician. Thus, the potential harm from these biases may be increased, particularly when introduced in demographics different from those on which the system trained.

A diagnostic study conducted at Oregon Health & Science University found AI imaging could infer self-reported race from retinal fundus images and vessel maps previously believed to not contain information relevant to race, something human graders cannot do.5

As the use of AI grows in medicine, clinicians and researchers may need to place focus on strategies to mitigate AI biases, from the data collection stage to the evaluation and post-authorization deployment stage.

Recent analyses indicate the role of artificial intelligence chatbots could soon be extended from a fun novelty to a test preparation tool in ophthalmology.

New data suggest the increasing benefit of the popular AI chatbot, ChatGPT, for preparation for ophthalmology board certification. The investigative team from the University of Toronto found that in July 2023, ChatGPT 4.0 correctly answered 84% of multiple-choice practice questions taken from OphthoQuestions, a common practice resource for board examination.6

Based on previous findings from the investigative team, the chatbot only answered 46% of multiple-choice questions correctly in January and improved slightly to 58% in February.7

"We can see almost in real time how this AI chatbot has evolved in terms of its ophthalmic knowledge, and the gains in the performance of the chatbot we're seeing in virtually every subspecialty area of ophthalmology, from cornea to glaucoma and retina," said Marko M. Popovic, MD, MPH, a resident physician in the department of ophthalmology and vision sciences at the University of Toronto, and one of the study investigators.1

While there remains work to be done, Popovic suggested the dramatic advances in the capability of ChatGPT in preparing for board certification in a short period of time lend credence to its future potential.

Another analysis suggested large language models provide appropriate ophthalmic advice to patient questions. Investigators from Stanford University noted, in particular, that the generated AI answers did not significantly differ from an ophthalmologist regarding incorrect information or the likelihood of harm.8

However, there are notable limitations to a chatbot's benefit, stemming from its capability for "hallucinations," when a large language model responds with incorrect information or facts that aren't based on real data.9

In a study from the New England Eye Center, a large language model-based platform provided largely inaccurate information on questions regarding vitreoretinal diseases, including age-related macular degeneration (AMD), diabetic retinopathy, and retinal vein occlusion, with inconsistencies on repeat inquiries. Exactly half (50.0%) of answers were materially different, even after no functional changes were made to the platform between the first and second question submissions.10
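The repeat-inquiry comparison described in this study can be illustrated with a small consistency check: submit the same questions twice and measure the fraction of answer pairs that materially changed. The answers below are invented stand-ins, not data from the New England Eye Center study.

```python
# Measure how often a model's answers change between two identical
# submission rounds. `same` is a caller-supplied comparison function,
# standing in for the study's judgment of "materially different."
def repeat_inconsistency(first_run, second_run, same):
    """Fraction of question pairs whose two answers differ materially."""
    pairs = list(zip(first_run, second_run))
    changed = sum(1 for a, b in pairs if not same(a, b))
    return changed / len(pairs)

# Toy answers to four vitreoretinal questions, asked twice.
answers_t1 = ["anti-VEGF", "observation", "laser", "anti-VEGF"]
answers_t2 = ["anti-VEGF", "anti-VEGF", "laser", "observation"]
rate = repeat_inconsistency(answers_t1, answers_t2, lambda a, b: a == b)
print(rate)  # → 0.5, i.e. half the answers changed between submissions
```

A 0.5 rate on this toy data mirrors the 50.0% figure the study reported, though the study's comparison of "materially different" answers was a clinical judgment, not exact string matching.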

Popovic suggested that inaccurate or incomplete responses could lead to suboptimal care and issues down the line, particularly if young physicians rely on answers from ChatGPT in the beginning stages of their careers.

"I think the bottom line here is at the end of the day, in the diagnosis and treatment of patients, the AI chatbot cannot be held accountable for what it provides," he said.1 "And that's particularly challenging in the situation where you ask ChatGPT what the symptoms of X condition are, and it provides 10 symptoms and only 9 of them are correct."

As disease patterns and treatment responses may be recognized much faster with the use of AI tools, these tools could mark a forward leap for the field. Jay Duker, MD, the president and chief executive officer of EyePoint Pharmaceuticals, believes relying on AI for certain abilities may not be the worst thing for the specialty.

"I've been saying for a while to young residents that in 10 years, we're not going to be diagnosticians," Duker said.11 "The optical coherence tomography is going to tell you what the patient has, and everyone says, 'Oh, that won't be fun anymore.' It will, because now we're going to concentrate on the patient, instead of concentrating on what they have, and we're going to connect with them at a more personal level."

Still, these specialists indicate more data is required for validation and the specialty should be cautious when implementing artificial intelligence into full-time clinical care. There is also a creeping, and understandable, fear of a future where AI replaces human intuition with ones and zeroes.

But it may be important to remember what these machines can and cannot do. In conjunction with a specialist's expertise, an AI system could improve patient outcomes without sacrificing the Hippocratic oath to do no harm.

"I think when these technologies are initially rolled out, we don't need to fully trust it," Jonisch said. "It can be used in addition to our current therapy, not instead of. That is how a lot of areas of medicine incorporate newer technologies: you don't initially fully rely on them. You do it in conjunction with what you're already doing."

$3 Million Grant Awarded to Develop Artificial Intelligence to Help … – Lupus Foundation of America

Recently, University of Houston researchers were awarded $3 million by the National Institute of Diabetes and Digestive and Kidney Diseases to develop an artificial intelligence (AI) system that will read and classify kidney biopsy results to more accurately diagnose lupus nephritis (LN, lupus-related kidney disease).

Diagnosing LN can be challenging. It typically requires a kidney biopsy, a painful and invasive procedure where a small piece of kidney tissue is collected and examined for signs of inflammation and damage. A pathologist then reads the biopsy report, but there are often differing interpretations of the results based on who reads the report. This research project aims to automate the classification of biopsy samples to aid in diagnostic accuracy. According to the researchers, using AI to train a neural network to learn how to read and classify the biopsies will lead to higher accuracy and translate to better treatment of LN.

The use of AI is novel in lupus research. Its ability to detect and select patterns can make the technology useful for classifying disease, which could revolutionize the diagnosis, treatment, and management of LN. Continue to follow the Lupus Foundation of America for developments stemming from this grant, as well as other news on lupus treatments and clinical trials.

Integrated Intelligence: Human Uses of, Strategies on, and Rules for … – Newlines Institute

Executive Summary

Humans have entered an age of artificial intelligence or, rather, of integrated intelligence. Already becoming more familiar with some forms of artificial intelligence in their daily lives, they'll inevitably embrace new technologies and techniques in everything from workplace productivity systems to drug design, manufacturing defect detection, and autonomous weapons. Given tiered societies and the complexity of consequences, American and other leaders must avoid trapping themselves in poor policies and practices. Rather than reacting counterproductively, they must strive for the sweet spot between important and urgent, innovative and responsible, private and public. Because they won't soon be able to resolve substantial uncertainty regarding how strongly or how rapidly people will experience the effects of artificial intelligence, American and other policymakers must get curious, be active, and prepare for a range of potential outcomes. They must work on all fronts, from domestic legislation and international coordination to enterprise policies and personal practices, while accepting that they can't control the future.

In this special report, the Future Frontiers team at New Lines Institute considers and proposes human uses of, strategies on, and rules for artificial intelligence in the 21st century. To do so, we summarize how humans have mythologized, theorized, and made machines since antiquity; explain how scientists and engineers have developed contemporary artificial intelligence during the industrial age, especially after World War II; provide an overview of artificial intelligence's complex consequences in the age of adoption; offer ideas on how American and other leaders may create strategies, policies, and laws on the technology; and consider whether and how people in every segment of society may adopt standards and practices in the coming age of integrated intelligence.

The Class Action Weekly Wire Episode 31: Artificial Intelligence … – Duane Morris

Duane Morris Takeaway: This week's episode of the Class Action Weekly Wire features Duane Morris partner Jerry Maatman and special counsel Brandon Spurlock with their discussion of the Senate Banking Committee's hearing this week regarding consumer protection in the financial sector from the risks of artificial intelligence, as well as their analysis of the potential implications in the regulatory environment and class action space as AI continues to be utilized in workplace and commercial operations.

Episode Transcript

Jerry Maatman: Welcome, loyal blog readers and listeners, to our Friday weekly podcast series. I'm joined by my colleague Brandon Spurlock today, and we're going to be focusing on artificial intelligence and the fact that that issue has been foremost in the mind of legislators in Washington, D.C. Brandon, welcome to our weekly podcast.

Brandon Spurlock: Thanks, Jerry. Always happy to be here.

Jerry: Brandon, there was quite a lot of activity at the Senate Banking Committee this week with respect to artificial intelligence. It involved consumers and protection of consumers. To me, AI is everywhere and in the news, in terms of how it impacts the workplace and how it impacts consumers. What's your takeaway from what occurred in Washington, D.C. this week?

Brandon: Yeah, Jerry, this topic is exploding everywhere, and the changes in every sector are fast and furious as AI advances. The hearing was led by the committee's chairman, Democratic Senator Sherrod Brown from Ohio. Brown opened the hearing by highlighting positive aspects of technology for society in the financial world. And you think about things like ATMs providing quick access to money, smartphone apps that can access banking online and bill paying. But he also explained that automation has led to many of the financial crises that we've seen in the past two decades. Brown stressed that any AI use in the financial sector should be utilized to make the economy better for consumers, and that there should be significant safeguards in place to ensure that it does so.

His Republican counterpart, Senator Mike Rounds of South Dakota, who was filling in for the committee's ranking member, also stressed the risks of AI, but took a different stance on the issues of regulation. He stated that there should be regulations regarding transparency and explainability in decision-making, especially where credit is involved, but that Congress should take a pro-innovation stance so the U.S. can attract talent, and that halting the progress of AI in the financial sector could put the U.S. at a competitive disadvantage.

Jerry: It struck me that here is a great example of technology accelerating faster than the law; the law is trying to catch up, and government regulators are thinking about the void that exists in the system regarding regulation. I know that the Senate Committee and Senator Brown focused on fraud and antitrust concerns, but the overlay was also the fear that artificial intelligence incorporates bias, that use of artificial intelligence could have an adverse impact on protected minority groups. What's your takeaway in terms of what we're going to see in the future in this particular area?

Brandon: Well, that's spot-on, Jerry. Brown highlighted that several AI tools companies in the financial sector already use have been shown to have ingrained discriminatory biases towards Black and Latino American borrowers. Specifically, banks use algorithms and machine learning AI models in consumer lending that can determine a borrower's creditworthiness. But this often automates and supercharges the biases that end up excluding minorities.

Jerry: I know that the Consumer Financial Protection Bureau is dabbling in this area, also focusing on regulations. But it seems to me that this is an area that the plaintiffs' class action bar is following. And my sense is that we're going to see a tipping point soon where there are going to be private plaintiff lawsuits brought over these issues, with allegations that the use of AI implicated antitrust or fraud concerns or discrimination, either in the employment arena in the workplace, or with the extension of credit or loans. What's your takeaway on class action risks in this area?

Brandon: Well, you know, there was a committee witness attending the hearing, Daniel Gorfine. He's the founder and CEO of advisory firm Gattaca Horizons, and he's a former chief innovation officer with the CFTC. He noted the risks of AI, but stated that speculative fear, or fear of future harm, should not broadly block development of AI in financial services.

Another witness, University of Michigan computer science and engineering professor Michael Wellman, urged that public and open knowledge on what practices can create risk will help better prepare financial systems for AI and inspire market rules and systems that remain resilient to AI's inevitable impacts.

So with all this said, Jerry, there will probably be no shortage of class action lawsuits that are filed, and I think as we see how those class actions progress, we'll also see how they impact the regulatory environment. I think both are going to have an impact on one another.

Jerry: Brandon, you're a thought leader in this area, and we'll be closely following artificial intelligence and its implications in litigation and government regulation, and in terms of what it means to companies in the private sector. Sincerely appreciate you lending your expertise today to our podcast, and thanks so much for joining us.

Brandon: Thanks for having me, Jerry.

Artificial Intelligence tools shed light on millions of proteins – Science Daily

A research team at the University of Basel and the SIB Swiss Institute of Bioinformatics uncovered a treasure trove of uncharacterised proteins. Embracing the recent deep learning revolution, they discovered hundreds of new protein families and even a novel predicted protein fold. The study has now been published in Nature.

In the past years, AlphaFold has revolutionised protein science. This Artificial Intelligence (AI) tool was trained on protein data collected by life scientists for over 50 years, and is able to predict the 3D shape of proteins with high accuracy. Its success prompted the modelling of an astounding 215 million proteins last year, providing insights into the shapes of almost any protein. This is particularly interesting for proteins that have not been studied experimentally, a complex and time-consuming process.

"There are now many sources of protein information, enclosing valuable insights into how proteins evolve and work," says Joana Pereira, the leader of the study. Nevertheless, research has long been faced with a data jungle. The research team led by Professor Torsten Schwede, group leader at the Biozentrum, University of Basel, and the Swiss Institute of Bioinformatics (SIB), has now succeeded in decrypting some of the concealed information.

A bird's eye view reveals new protein families and folds

The researchers constructed an interactive network of 53 million proteins with high quality AlphaFold structures. "This network serves as a valuable source for theoretically predicting unknown protein families and their functions on a large scale," underlines Dr. Janani Durairaj, the first author. The team was able to identify 290 new protein families and one new protein fold that resembles the shape of a flower.
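The family-finding idea behind such a network can be illustrated with a toy model: treat proteins as nodes, connect pairs whose structures are similar, and read candidate families off the connected components. Real pipelines compare millions of AlphaFold models with structure-similarity scores; the protein names and edges below are made up for illustration.

```python
# Group proteins into candidate families as connected components of a
# similarity network, using a small union-find structure.
def protein_families(proteins, similar_pairs):
    parent = {p: p for p in proteins}

    def find(x):                      # union-find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for a, b in similar_pairs:        # merge proteins judged similar
        parent[find(a)] = find(b)

    families = {}
    for p in proteins:
        families.setdefault(find(p), []).append(p)
    return sorted(sorted(f) for f in families.values())

nodes = ["P1", "P2", "P3", "P4", "P5"]
edges = [("P1", "P2"), ("P2", "P3"), ("P4", "P5")]
print(protein_families(nodes, edges))  # → [['P1', 'P2', 'P3'], ['P4', 'P5']]
```

Transitivity is the point of this design: P1 and P3 land in the same family even though only P1-P2 and P2-P3 were directly similar, which is how large networks surface families no single pairwise comparison reveals.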

Building on the expertise of the Schwede group in developing and maintaining the leading software SWISS-MODEL, they made the network available as an interactive web resource, termed the "Protein Universe Atlas."

AI as a valuable tool in research

The team has employed Deep Learning-based tools for finding novelties in this network, paving the way to innovations in life sciences, from basic to applied research. "Understanding the structure and function of proteins is typically one of the first steps to develop a new drug, or modify their functions by protein engineering, for example," says Pereira. The work was supported by a 'kickstarter' grant from SIB to encourage the adoption of AI in life science resources. It underscores the transformative potential of Deep Learning and intelligent algorithms in research.

With the Protein Universe Atlas, scientists can now learn more about proteins relevant to their research. "We hope this resource will help not only researchers and biocurators but also students and teachers by providing a new platform for learning about protein diversity, from structure, to function, to evolution," says Janani Durairaj.

Read this article:
Artificial Intelligence tools shed light on millions of proteins - Science Daily

Read More..

Using Artificial Intelligence To Advance Development In Africa – Africa.com

By John Njogu

Africa is at the cusp of a technological renaissance, and at the heart of this transformation lies the ever-expanding realm of artificial intelligence (AI). As the continent grapples with both longstanding challenges and emerging opportunities, AI is a potent force that could reshape its future in profound ways. From bolstering healthcare delivery in remote villages to revolutionizing agriculture and leapfrogging infrastructural limitations, Africa's journey with AI is not just a story of innovation but a testament to the continent's resilience and determination to bridge the digital divide. In this era of AI-driven progress, the potential for Africa is limitless, provided we navigate the terrain of ethical, socio-economic, and policy considerations with unwavering commitment and foresight.

In the era of AI, also known as the Fourth Industrial Revolution, countries worldwide, particularly in North America, Europe and Asia, are investing significantly in leveraging AI's potential for socio-economic growth. These countries are releasing corresponding AI policy frameworks, while Africa lags in comprehensive AI policy formulation and effective utilization of AI for its own development.

The state of AI adoption in Africa

Numerous obstacles hinder Africa's embrace of AI, as reported by Abejide in Responsible AI in Africa. These include basic challenges such as inadequate sanitation, food insecurity, limited internet access and poor education systems. According to a 2022 demographic report on internet usage released by Statista, internet usage is expanding in Africa, with an estimated 570 million users. However, there is variation in technological uptake within the continent. Nations including Nigeria, Egypt and South Africa lead in smartphone adoption, but the continent's overall internet penetration is just 43%, well below the global average of 67%.

In the agricultural sector, most Africans still depend on subsistence farming for their livelihoods. Climate change, however, has drastically affected farm outputs, leading many farmers to turn to other sources of income that seem more lucrative. Health care in many African countries, especially in rural areas, is not digitalized. Many local clinics and hospitals still use paper for orders and records, an indication of how far we are from a universally digitized healthcare system.

Despite these challenges, the silver lining is that technology has the capacity to rapidly transform challenges into opportunities. For example, in Sierra Leone, various stakeholders like World Vision Sierra Leone have partnered with the local government to invest heavily in digitizing their health system through the Ministry of Health. In Mali, the company Robots Mali employs language processing to create educational content for school children, teaching fundamental concepts in Bambara, a widely spoken local language. This initiative addresses the challenge of poor early education performance among young learners who are taught in a language that is foreign to them.

The data dilemma: privacy, exploitation and surveillance

The success of AI relies heavily on the availability of robust data to train models. Private companies are collecting massive amounts of data from individuals, some without their knowledge. Data privacy policies are generally impenetrable to the average person, too long and technical to bother with, and difficult to revisit for verification.

The exploitation of African data, and the general absence of data sovereignty, underscores the urgent necessity to improve data privacy and transparency across the continent. At the extreme, people are manipulated into surrendering biographic data, such as in Kenya, where a recent World Coin cryptocurrency campaign had a multitude of Kenyans queuing for hours to surrender their iris biometric data for a meager $49 inducement.

This incident reflects a repeated narrative in African nations, where personal data is amassed by private entities without sufficient informed consent from the public and without due consideration for data privacy and transparency. It took a groundswell of concern by Kenyan citizens about the utilization and storage of their data for the government to halt World Coin activities. The episode serves as a potent reminder of the broader issue at hand that needs solving before AI can flourish in Africa: a lack of control over personal data and the imperative to safeguard data privacy and transparency in Africa and beyond.

AI policy initiatives and impact on African development

Yet the embrace of AI holds the potential to help African economies grow exponentially. Many African countries have realized this and have started to develop policies that position themselves as AI leaders. The Science for Africa Foundation's Science Policy Engagement for African Research (SPEAR) programme is seeking to address the AI policy gaps across sectors in Africa, on the belief that the transformative potential of AI can accelerate achievement of the UN's Sustainable Development Goals and improve economies across the continent.

The SFA Foundation's approach emphasizes the importance of diverse, equitable, inclusive, adaptable and stakeholder-owned policies. By holding regional convenings to engage stakeholders in identifying country- and regional-level policy needs, and by encouraging dialogue and collaboration, the Foundation aims to formulate effective policies that are evidence-based and aligned with development goals and local contexts. Two of these workshops have already been held, in Southern Africa and in West Africa, with more to follow targeting the three remaining regions. At the end of the convenings, the SFA Foundation will generate a report on the status of AI in global health in Africa.

Seizing the AI opportunity for a prosperous Africa

AI is not just a distant aspiration for Africa; it is already at work, transforming lives in Sierra Leone and Mali. AI holds immense potential to address the issues Africa is grappling with, ranging from internet access disparities and climate-induced agricultural woes to the critical need for healthcare digitization. However, the journey ahead demands a collective effort.

To harness AI's full potential, Africa must prioritize data privacy and transparency, and safeguard its data resources from exploitation. It's crucial that African nations continue to develop equitable, stakeholder-owned policies through initiatives like the SFA Foundation's SPEAR programme. The time is ripe to invest in education and nurture local talent to effectively manage data. In doing so, we can ensure that AI doesn't exacerbate existing inequalities but instead drives us toward a safer, sustainable, and more equitable African future. Let us seize this opportunity to propel Africa's development, bridge the inequity gap, and foster prosperity through the transformative power of Artificial Intelligence.

See the original post:
Using Artificial Intelligence To Advance Development In Africa - Africa.com

Read More..

Generative Artificial Intelligence (AI): Canadian Government … – JD Supra

The Canadian government continues to take note of and react to the widespread use of generative artificial intelligence (AI). Generative AI is a type of AI that generates output that can include text, images or other materials, based on material and information that the user inputs (e.g., ChatGPT, DALL-E 2 and Midjourney). In recent developments, the Canadian government has: (1) opened up consultation on a proposed Code of Practice (the Code) and provided a proposed framework for the Code;1 and (2) published a Guide on the use of Generative AI for federal institutions on September 6th, 2023 (the Guide).2

More generally, as discussed below, as Canadian companies continue to adopt generative AI solutions, they may take note of the initial framework set out for the Code, as well as the information in the Guide, in order to minimize risk and ensure compliance with future AI legislation. A summary of the key points of the proposed Code and Guide is provided below.

The Code is intended for developers, deployers and operators of generative AI systems to avoid harmful impacts of their AI systems and to prepare for, and transition smoothly into, future compliance with the Artificial Intelligence and Data Act (AIDA),3 should the legislation receive royal assent.

In particular, the Government has stated that it is committed to developing a code of practice, which would be implemented on a voluntary basis by Canadian firms ahead of the coming into force of AIDA.4 For a detailed look into what future AI regulation may look like in Canada, please refer to our blog, Artificial Intelligence - A Companion Document Offers a New Roadmap for Future AI Regulation in Canada.

In the process of developing the Code, the Canadian government has set out a framework for the Code, and has now opened consultation on this framework. To that end, the government is requesting comments on the following elements of the proposed framework:

In the proposed framework for the Code, developers and deployers would be asked to identify ways that their system may attract malicious use (e.g., impersonate real individuals) and take steps to prevent such use from occurring.

Additionally, developers, deployers and operators would be asked to identify the ways that their system may attract harmful or inappropriate use (e.g., use of a large language model for medical or legal advice) and again, take steps to prevent this inappropriate use from occurring.

To this end, it would be suggested by the Code that developers assess and curate datasets to avoid low-quality data and non-representative datasets/biases. Further, developers, deployers and operators would be advised to implement measures to assess and mitigate risk of biased output (e.g., fine-tuning).
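As a toy illustration of the dataset-curation practice described above, the sketch below drops empty, fragmentary and duplicated records before they reach training. The field names, thresholds and example records are hypothetical and not drawn from the Code itself, which leaves the choice of curation methods to developers.

```python
# Illustrative pre-training data filter: discard records that are
# empty, too short, or exact duplicates. Field names and the length
# threshold are hypothetical.

MIN_LENGTH = 20  # discard fragments too short to be meaningful

raw_records = [
    {"text": "A well-formed training example with enough content.", "source": "curated"},
    {"text": "", "source": "scrape"},            # empty -> dropped
    {"text": "too short", "source": "scrape"},   # below threshold -> dropped
    {"text": "A well-formed training example with enough content.", "source": "scrape"},  # duplicate -> dropped
]

def curate(records):
    seen = set()
    kept = []
    for rec in records:
        text = rec["text"].strip()
        if len(text) < MIN_LENGTH:
            continue  # low-quality: empty or fragmentary
        if text in seen:
            continue  # exact duplicate of an earlier record
        seen.add(text)
        kept.append(rec)
    return kept

clean = curate(raw_records)
print(len(clean))  # 1
```

Real curation pipelines add near-duplicate detection, language identification and bias audits on top of simple filters like this, but the shape of the step, filtering before training rather than patching afterwards, is the same.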

Accordingly, a future Code would recommend that developers and deployers provide a reliable and freely available method to detect content generated by the AI system (e.g., watermarking), as well as provide a meaningful explanation of the process used to develop the system (e.g., provenance of training data, as well as measures taken to identify and address risks).

Additionally, operators would be asked to ensure that systems that could be mistaken for humans are clearly and prominently identified as AI systems.

A future Code would potentially recommend that deployers and operators of generative AI systems provide human oversight in the deployment and operations of their system. Further, developers, deployers and operators would be asked to implement mechanisms to allow adverse impacts to be identified and reported after the system is made available.

In this vein, a future Code would recommend that developers use a wide variety of testing methods across a spectrum of tasks and contexts (e.g., adversarial testing) to measure performance and identify vulnerabilities. As well, developers, deployers and operators would be asked to employ appropriate cybersecurity measures to prevent or identify adversarial attacks on the system (e.g., data poisoning).

Developers, deployers and operators of generative AI systems may therefore ensure that multiple lines of defence are in place to secure the safety of their system. This could include ensuring that both internal and external (independent) audits of the system are undertaken before and after it is put into operation, and developing policies, procedures and training to ensure that roles and responsibilities are clearly defined and that staff are familiar with their duties and the organization's risk management practices.

Accordingly, as the framework for the Code evolves through the consultative process, it is expected that it will ultimately provide a helpful guide for Canadian companies involved in the development, deployment and operation of generative AI systems as they prepare for the coming-into-force of AIDA.

The Guide is another example of the Canadian government accounting for the use of generative AI. The Guide provides guidance to federal institutions and their employees on their use of generative AI tools, including identifying challenges and concerns relating to its use, putting forward principles for using it responsibly, and offering policy considerations and best practices.

While the Guide is intended for federal institutions, the issues it addresses may have more universal application to the use of generative AI systems, broadly. Accordingly, organizations may consider referring to the Guide as a guiding template, while developing their own internal AI policies for use of generative AI.

In more detail, the Guide identifies challenges and concerns with the use of generative AI, including the generation of inaccurate or incorrect content (known as "hallucinations") and/or the amplification of biases. More generally, the government notes that generative AI may pose "risks to the integrity and security of federal institutions."8

To mitigate these challenges and risks, the Guide recommends that federal institutions adopt the "FASTER" approach:

Organizations may take heed of the FASTER approach as a potential guiding framework to the development of their own policies on the use of generative AI.

Various other highlights noted by the Guide include the following:

In view of the foregoing, Canadian companies exploring the use of generative AI may take note of the FASTER principles set out by the Guide, as well as the various best practices proposed.

Taken together, the Code and the Guide provide helpful guidance for organizations that wish to be proactive as they develop their AI policies and ensure they are compliant with AIDA should it receive royal assent.

1Government of Canada, Canadian Guardrails for Generative AI Code of Practice, last modified 16 August 2023 ["Consultation Announcement"].

2Government of Canada, Guide on the use of Generative AI, last modified 6 September 2023 ["The Guide"].

3Bill C-27, An Act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act and to make consequential and related amendments to other Acts, 1st Sess, 44th Parl, 2021 (second reading completed by the House of Commons on 24 April 2023).

4Consultation Announcement.

5Consultation Announcement.

6Consultation Announcement.

7Consultation Announcement.

Read the original post:
Generative Artificial Intelligence (AI): Canadian Government ... - JD Supra

Read More..

$4.5 million for three FRQS Dual Chairs in Artificial Intelligence and … – McGill Newsroom

The FRQS Dual AI Chairs Program supports research collaborations across disciplines in pursuit of the significant potential of AI to address some of humanity's greatest health challenges.

The rapid development and deployment of artificial intelligence demands we connect our best and brightest minds, and work to train the next generation of research leaders. In June, the Fonds de Recherche du Québec – Santé (FRQS) announced $4.5 million for three Dual Chairs in Artificial Intelligence and Health/Digital Health and Life Sciences, all three of which were awarded to teams co-directed by McGill researchers. The program brings together researchers with complementary expertise in AI, data sciences and life sciences to address issues and challenges impacting the health of Canadians and the efficiency and effectiveness of the healthcare system. With the investment from this and a previous call in 2021, the program will facilitate simultaneous research training for more than 60 students and postdoctoral fellows in the fields of AI and life sciences.

Each chair will receive $1.5 million, distributed over three years. The Dual AI Chairs are supported in part by the ministère de l'Économie, de l'Innovation et de l'Énergie. As of July 1st, the programs are actively recruiting trainees.

"With this significant support from the Fonds de recherche du Québec – Santé (FRQS), an emerging generation of researchers will develop the skills and expertise they need to design the health solutions of the future, to make medicine safer, and to advance treatment for some of the most devastating diseases and disorders," said Martha Crago, Vice-Principal, Research and Innovation. "The fact that McGill researchers are co-directing all three FRQS Dual AI Chairs is truly impressive, and a testament to the expansive expertise and collaborative spirit of our AI, data sciences, and life sciences research communities," she added.

Professor of Neurology and Neurosurgery and Director of the Centre for Research in Neuroscience (RI-MUHC), Keith Murai, and McGill Professor of Computer Science, Kaleem Siddiqi, will co-direct the Dual AI Chair, Cracking the nanoscopic structural code of the brain: Artificial intelligence and computer vision approaches for brain health, which promises to advance understanding of Alzheimer's and other neurodegenerative diseases.

McGill Associate Professor of Medical Physics, John Kildea, and Associate Professor in the Department of Computer Engineering and Software Engineering at Polytechnique Montréal, Amal Zouaq, will co-direct the Dual AI Chair, Smart data for smart cancer care, a research program that combines expertise in natural language processing, semantic web technologies, and patient-centered data to create knowledge bases in oncology. With the goals of reducing risk and making cancer treatment safer and more effective, Kildea and Zouaq are collaborating to build an AI solution that will combine, consolidate, and exploit unstructured health data.

Mathieu Blanchette, Associate Professor in McGill's School of Computer Science, will co-direct the Dual AI Chair, Développement d'approches en intelligence artificielle pour élucider les codes de régulation des ARN et exploiter leur potentiel thérapeutique, with Éric Lécuyer of the Montreal Clinical Research Institute (IRCM). This program aims to tap into the potential of AI to facilitate discoveries in RNA biology and therapeutics.

Learn more about the three FRQS Dual AI Chairs

Originally posted here:
$4.5 million for three FRQS Dual Chairs in Artificial Intelligence and ... - McGill Newsroom

Read More..