
Synergy of Generative, Analytical, Causal, and Autonomous AI – Data Science Central

The current fascination with Generative AI (GenAI), especially as manifested by OpenAI's ChatGPT, has raised public awareness of Artificial Intelligence (AI) and its ability to create new sources of customer, product, service, and operational value. The ability of GenAI tools and Large Language Models (LLMs) to generate new textual, graphical, video, and audio content is astounding.

However, let's not forget about the predictive, understandable, and continuously learning legs of AI: analytical AI, which focuses on pattern recognition and prediction; causal AI, which seeks to identify and understand cause-and-effect relationships; and autonomous AI, which aims to operate independently and make real-time decisions. In the ever-evolving landscape of artificial intelligence (AI), four distinct but equally transformative branches have emerged: Generative AI, Analytical AI, Causal AI, and Autonomous AI.

As organizations strive to harness the power of data to drive decision-making and innovation, understanding the differences, similarities, and collaborative potential between these types of AI is crucial. This blog explores these facets, highlighting how combining Generative, Analytical, Causal, and Autonomous AI can unlock unprecedented economic value and create new opportunities for customer, product, service, and operational advancements (Figure 1).

Figure 1: Analytics (AI) Business Model Maturity Index

As always, let's start by establishing some definitions:

As I wrote in an earlier blog titled "Generative AI: Precursor to Autonomous Analytics," Generative AI is a foundational technology leading toward developing Autonomous AI. Generative AI, with its ability to create new data and content based on existing patterns, paves the way for more sophisticated autonomous systems. These systems leverage the generative capabilities to enhance their decision-making processes, operate independently, and adapt to dynamic environments. This progression underscores the importance of understanding the interplay between these AI types to fully harness their combined potential in driving innovation and efficiency across various sectors.

Let's create a quick matrix that compares critical aspects of these four different classifications of AI (Table 1).

Table 1: Four Types of Artificial Intelligence (AI)

The synergy of Generative AI, Analytical AI, Causal AI, and Autonomous AI can profoundly impact every industry. Here are just a few examples (Figure 2):

Figure 2: Industry Use Cases: Synergizing Generative, Analytical, Causal, and Autonomous AI

These use cases demonstrate how integrating Generative AI, Analytical AI, Causal AI, and Autonomous AI can drive innovation, efficiency, and effectiveness across various industries, leveraging the strengths of each AI type to create significant value.

To fully realize the benefits of AI technologies, organizations must understand and capitalize on the distinct capabilities of Generative AI, Analytical AI, Causal AI, and Autonomous AI. By synergizing across these different types of AI, organizations can drive innovation, elevate decision-making processes, and optimize operational efficiency. The collective potential of these AI technologies emphasizes the transformative influence of AI in developing advanced, adaptable, and streamlined systems.

Go here to read the rest:

Synergy of Generative, Analytical, Causal, and Autonomous AI - Data Science Central


Brain Data Science Platform increases EEG accessibility with open data and research enabled by AWS – AWS Blog

Introduction

About 4.5 million electroencephalogram (EEG) tests are performed in the US each year. That's more than if every person in Oregon, Connecticut, or Iowa got an EEG. Compared to magnetic resonance imaging (MRI) scans, which use magnetic fields and radio waves to generate images of the structure of the brain, EEGs use wires placed on the scalp to record brain function as seen through the electrical activity the brain generates as its neurons send signals to each other. The Brain Data Science Platform (BDSP), hosted on Amazon Web Services (AWS), is increasing EEG accessibility through cooperative data sharing and research enabled by the cloud.

Because they provide insights into brain activity and not just structure, EEGs are one of the most common tests ordered by doctors to help make a diagnosis for people with brain problems. This includes seizures and epilepsy, coma, stroke, developmental delays in children, and sleep disorders. However, in current practice, EEGs are not always part of a diagnostic plan, even when they could provide important information. Currently, experts trained in EEG are in short supply, and methods to automatically interpret EEGs are not yet advanced enough to fill this gap. For these reasons, diagnostic plans that include EEGs are limited. The cloud increases EEG accessibility by facilitating data sharing and research innovation, making EEGs more accessible for more patients' medical care plans.

Brain cells talk to each other using tiny electrical impulses and are constantly active, even during sleep. Activity is measured by placing small, metal discs (electrodes), held in place by tape or glue, at different locations on your scalp. These electrodes detect the tiny voltage fluctuations from the activity of millions of neurons in the brain. The electrodes are connected to an amplifier, which magnifies the weak electrical signals picked up by the electrodes. The acquired signals are often weak and susceptible to various types of interference, including noise from the environment or other non-brain biological sources. The amplifier performs signal conditioning operations to filter out unwanted noise and artifacts while preserving the relevant brain activity. This can involve processes such as amplification, filtering, and isolation.

Once conditioned, signals are converted from analog to digital format using an analog-to-digital converter (ADC). The ADC samples the analog signals at a specific rate and converts them into a digital representation that can be processed and analyzed by a computer. The digitized EEG signals can undergo further processing, like additional filtering, artifact removal, and feature extraction. Various algorithms and techniques can be applied to extract meaningful information from the EEG signals, depending on the specific analysis or diagnostic purpose. For example, algorithms can detect EEG activity that is normal for a given age, as shown in Figure 1.

Figure 1. A 15-second excerpt from a normal EEG recording from a 42-year-old woman who is awake. The vertical bars are 1 second apart. The channel names indicate locations on the person's head.

Algorithms can also detect harmful brain activity like a seizure, as shown in Figure 2.

Figure 2. A 15-second excerpt from an EEG recording from a 22-year-old man. The high-amplitude rhythmic brain activity is a seizure. Detecting this kind of abnormal electrical activity in the brain helps establish a diagnosis of epilepsy and helps doctors choose a treatment that can prevent future seizures.

The processed EEG signals are stored for later review or analyzed in real time. The data is visualized as waveforms or through spectral analysis, event-related potential analysis, or analysis by machine learning (ML) algorithms to detect abnormalities or patterns of interest.
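To make those processing steps concrete, here is a minimal sketch of the digital side of the pipeline: notch and band-pass filtering for signal conditioning, then a Welch periodogram with band-power features of the kind fed to ML models. The sampling rate, filter orders, band edges, and the synthetic signal itself are illustrative assumptions, not values documented for BDSP.

```python
# Illustrative sketch only: conditioning and spectral analysis of a synthetic
# "EEG" trace. All parameter choices here are assumptions for demonstration.
import numpy as np
from scipy import signal
from scipy.integrate import trapezoid

FS = 256  # assumed sampling rate in Hz

# Synthetic 15-second trace: 10 Hz alpha rhythm, 60 Hz mains hum, and noise.
t = np.arange(0, 15, 1 / FS)
rng = np.random.default_rng(0)
eeg = (30e-6 * np.sin(2 * np.pi * 10 * t)      # alpha activity
       + 10e-6 * np.sin(2 * np.pi * 60 * t)    # powerline artifact
       + 5e-6 * rng.standard_normal(t.size))   # background noise

# Signal conditioning: notch out mains interference, then band-pass to the
# clinically interesting 0.5-40 Hz range.
b, a = signal.iirnotch(w0=60, Q=30, fs=FS)
conditioned = signal.filtfilt(b, a, eeg)
sos = signal.butter(4, [0.5, 40], btype="bandpass", fs=FS, output="sos")
conditioned = signal.sosfiltfilt(sos, conditioned)

# Spectral analysis: Welch power spectral density, then power per band.
freqs, psd = signal.welch(conditioned, fs=FS, nperseg=2 * FS)
bands = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
for name, (lo, hi) in bands.items():
    mask = (freqs >= lo) & (freqs < hi)
    print(f"{name:>5}: {trapezoid(psd[mask], freqs[mask]):.3e} V^2")
```

Band powers like these (alpha dominant here, by construction) are typical of the features that downstream detection algorithms consume.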

In the US, about 75 percent of EEGs are interpreted by neurologists without expertise in EEG interpretation. This can lead to mistakes in tricky cases, such as when an EEG looks abnormal but really is not, so some patients who really have a heart problem get misdiagnosed with epilepsy instead. In many parts of the world, patients are unable to get an EEG because the doctors available are not trained to interpret EEGs. People who have sleep problems face similar challenges in getting a diagnosis. Doctors can order a sleep test (which includes recording EEG and other signals overnight while sleeping) to help diagnose sleep problems. However, getting the sleep test done can take a long time or may not be possible because there is a shortage of sleep specialists.

A research team at Harvard Medical School, headed by Drs. Brandon Westover and Robert Thomas at Beth Israel Deaconess Medical Center, and Drs. Valdery Moura Junior and Sahar Zafar at Massachusetts General Hospital, aims to make EEGs more easily accessible by using artificial intelligence (AI) to automate medical diagnosis based on EEG. They are joined in this effort by other scientists from several institutions.[i]

The team is working to automate EEG and sleep testing interpretation by developing the Brain Data Science Platform (BDSP), the world's largest and most diverse set of EEG and sleep testing data. Using this data, the team is constructing algorithms that diagnose sleep disorders, detect seizures and other forms of harmful brain activity in hospitalized patients who are critically ill, predict the risk of future seizures, and calculate the probability that a patient with coma due to brain damage will be able to recover consciousness. To be useful in the real world, these algorithms need to cope with EEG patterns from all people, regardless of age, gender, race, and ethnicity, and across a vast number of different health conditions. Thus, the algorithms that underlie automated EEG and sleep test interpretation must be well-trained and well-tested so that the resulting diagnoses are at least as reliable as those currently obtained when EEGs are interpreted by human experts with specialty medical training in EEG and sleep test interpretation.

Beyond the diagnostic information currently available from EEGs, the team believes that there is hidden information, especially during sleep, that reveals insights into the health of the brain and which even experts cannot see. The understanding is that each of the different stages of sleep (rapid-eye movement (REM) sleep and the light and deep stages of non-REM sleep) has certain patterns that are normal for a given age and gender. Divergence from these norms can indicate positive or negative deviations from normal health.

The team is developing AI algorithms that use sleep signals to detect early signs of diseases like Parkinson's disease and Alzheimer's disease. Earlier detection allows treatments to be given sooner, when they can be more effective. The team has already found that information from sleep can predict life expectancy. Finally, one member of the research team, Dr. Haoqi Sun, has developed a way to measure brain age, as distinct from chronologic age. This validates the concept that someone who is 80 can have a brain that functions like someone 20 years younger. Accelerated brain aging (brain age older than chronologic age) is linked to a variety of brain health problems, including declining cognitive functioning and diseases like Alzheimer's. The team believes that the ability to measure brain age, and similar types of hidden health information in sleep, may enable doctors to treat diseases more efficiently and effectively while providing more direct ways to measure the effects of those treatments on brain health.

The team has assembled a massive collection of EEG and sleep data: currently more than 200,000 EEG recordings and 26,000-plus sleep tests. These span all medical settings where EEGs and sleep tests are performed, including outpatient neurology clinics, epilepsy centers, sleep centers, and home settings where data is collected using wearable consumer devices. For more valuable research, additional metadata is being collected as well. This includes diagnoses, medications, laboratory testing results, and brain imaging, including head computed tomography (CT) scans and brain MRI images.

Dr. Westover intends for any clinic or hospital in the world to benefit from the models that BDSP researchers develop. The benefits would not be limited to clinical groups with access to powerful on-premises servers and clusters.

"We intend to offer the machine learning EEG interpretation models as an online service, where sites can upload their EEG data and get their results back within seconds," says Dr. Westover. "If we want to engage clinical sites without such a level of internal IT infrastructure and bandwidth, we need to offer them a simple way to access our developments. This is where the cloud is going to be crucial."

Dr. Westover is collaborating with AWS to increase access to brain data and support researchers focusing on brain health. Twelve other hospitals have already committed to adding their data to BDSP on AWS, further increasing the possibilities for new discoveries by making it possible to study rare diseases, which are typically not seen often enough at any single hospital to allow rigorous research. Dr. Westover believes BDSP will transform the field of brain health, paving the way for more personalized and precise ways to diagnose, treat, and prevent neurological disease.

"These datasets are going to let us launch the field of precision brain health," said Dr. Westover.

With support from the AWS Open Data Sponsorship Program, the BDSP datasets are now openly available at no cost to researchers around the world. The BDSP dataset is one of the largest collections of brain data in existence.

[i] Collaborators include Junior Moura, PhD, Umakanth Katwa, MD, Wolfgang Ganglberger, PhD, Thijs Nassi, MSc, Erik-Jan Meulenbrugge, MSc, Yalda Amidi, PhD, Jin Jing, PhD, Haoqi Sun, PhD, Mouhsin Shafi, MD, PhD, Daniel Goldenholz, MD, PhD, Arjun Singh, MD, Sahar Zafar, MD, Shibani Mukerji, MD, PhD, Jurriaan Peters, MD, and Tobias Loddenkemper, MD at Harvard Medical School; Aaron Struck, MD at University of Wisconsin; Jennifer Kim, MD, PhD at Yale University; Emmanuel Mignot, MD, PhD and Chris Lee-Messer, MD, PhD at Stanford University; Gari Clifford, PhD, Samaneh Nasiri, PhD, and Lynne Marie Trotti, MD at Emory University; and Dennis Hwang, MD at Kaiser Permanente.

Read the original post:

Brain Data Science Platform increases EEG accessibility with open data and research enabled by AWS - AWS Blog


AI readiness requires buy-in, technology and good governance – TechTarget

While data management and analytics are now firmly in a new era with AI by far the main focal point of users' interests and vendors' product development, readiness for AI is key for organizations before they can make use of cutting-edge capabilities.

In another era, the rise of self-service analytics required enterprises to modernize data infrastructures and develop data governance frameworks that balance setting limits on access to data depending on an employee's role while enabling confident exploration and analysis.

Now, similarly, the era of AI requires organizations to modernize, according to Fern Halper, vice president of research at research and advisory firm TDWI. As a result, top priorities for organizations are supporting sophisticated analytics and making sure data is prepared and available for AI models and applications, according to TDWI's research.

"Organizations are trying to get ready for AI because many of them are viewing it as an imperative for something like digital transformation, competitiveness, operational efficiency and other business drivers," Halper said on July 10 during a virtual conference hosted by TDWI.

Ensuring readiness for developing and deploying AI models and applications is a process, she continued. Included in the process are proper data preparation; operational readiness, including sophisticated data platforms; and appropriate AI governance.

While technology and governance are critical aspects of AI readiness, the process of preparing for AI development and deployment begins with organizational buy-in. Those who want to use AI to surface insights and inform decisions need to get support from the executive suite that trickles down throughout the rest of the organization.

The new era of AI in data management and analytics started in November 2022 when OpenAI released ChatGPT, marking a significant improvement in generative AI capabilities.

Enterprises have long wanted to make analytics use more widespread, given that data-driven decisions spur growth at a higher rate than decisions made without data. However, due to the complexity of analytics and data management platforms, which require coding to carry out most tasks and data literacy training to interpret outputs, analytics use has stagnated for around two decades. Only about a quarter of employees within organizations regularly use data in their workflows.

Generative AI has the potential to change that by enabling the true natural language processing that tools developed by analytics and data management vendors never could. In addition, generative AI tools can be programmed to automate repetitive tasks, which eases burdens placed on data engineers and other data experts.

As a result, many vendors have made generative AI a focus of their product development, building tools such as AI assistants that can be used in concert with an enterprise's data to enable natural language queries and analysis. Simultaneously, many enterprises have made generative AI a focus of their own development, building models and applications that can be used to generate insights and automate tasks.

Still, getting executives to recognize the importance of generative AI sometimes takes effort, according to Halper.

"None of this works if an organization isn't committed to it," she said.

Commitment is an ongoing process barely two years into this new era, Halper continued, noting that a TDWI survey showed that only 10% of respondents have a defined AI strategy in place and another 20% are in the process of implementing an AI strategy. In addition, less than half of all respondents report that their leadership is committed to investing in the necessary resources, including the people required to work with the requisite tools, such as data operations staff.

To get executive support, it takes demonstrating that existing problems can be solved with AI capabilities and showing the potential results, such as cost savings or increased growth.

"Your organization is going to need to be made aware of what's needed for AI," she said. "It's really best to understand the business problems you're trying to solve with AI so that you can frame [the need for AI] in a way the business leaders understand. Then you can show how you'll measure value from AI. This can take some doing, but it's necessary to engage the business stakeholders."

Assuming there's organizational support, AI readiness begins with the data at the foundation of any model or application.

Models and applications trained with high-quality data will deliver high-quality outcomes. Models and applications trained with low-quality data will deliver low-quality outcomes. In addition, the more quality data that can be harnessed to train an AI model or application, the more accurate it will be.

As a result, structured data, such as the financial and transaction records that have historically informed analytics reports and dashboards, is required. In addition, unstructured data, such as text and images that are often left unused, is important.

Accessing unstructured data in addition to structured data and transforming that unstructured data to make it discoverable and usable takes a modern data platform. So does combining that data with a large language model, such as ChatGPT or Google Gemini, to apply generative AI.

A 20-year-old data warehouse doesn't have the necessary technology, which includes the compute power, to handle AI workloads. Neither does an on-premises database.

"Organizations are concerned about futureproofing their environment to handle the needs of increased data availability and workload speed and power and scalability for AI," Halper said.

Cloud data warehouses, data lakes and data lakehouses are able to handle the data volume required to inform AI models and applications. Toward that end, spending on cloud-based deployments is increasing while spending on on-premises deployments is dropping.

But that's just a start. The trustworthy data required for AI readiness remains a problem, with less than half of those surveyed by TDWI reporting they have a trusted data foundation in place.

Automation can help, according to Halper. By using data management and analytics tools that themselves use AI to automate data preparation, organizations can improve data quality and the trustworthiness of insights.

Data ingestion, integration, pipeline development, and curation are complex and labor-intensive. Tools that automate those processes improve efficiency, given that machines are much faster than humans. They also improve accuracy. No person or team of people can examine every data point among potentially millions for accuracy, whereas machines can be programmed to do so.
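As a concrete illustration of the kind of automated check this describes, here is a minimal sketch of a programmatic data-quality scan. The dataset, column names, and thresholds are invented for the example; commercial observability tools offer far richer versions of these checks.

```python
# Minimal sketch of an automated data-quality scan. The DataFrame, column
# names, and thresholds are illustrative assumptions, not a real product's API.
import pandas as pd

def scan_quality(df: pd.DataFrame, max_null_rate: float = 0.05) -> list[str]:
    """Return human-readable data-quality findings for a table."""
    findings = []
    # Completeness: flag columns whose null rate exceeds the threshold.
    for col, rate in df.isna().mean().items():
        if rate > max_null_rate:
            findings.append(f"{col}: {rate:.1%} nulls exceeds {max_null_rate:.0%}")
    # Uniqueness: flag fully duplicated rows.
    dupes = int(df.duplicated().sum())
    if dupes:
        findings.append(f"{dupes} duplicate row(s)")
    # Validity: simple range check on numeric columns (no negative amounts).
    for col in df.select_dtypes("number"):
        if (df[col] < 0).any():
            findings.append(f"{col}: contains negative values")
    return findings

transactions = pd.DataFrame(
    {"amount": [19.99, -3.00, None, 42.50], "region": ["NA", "EU", "EU", "EU"]}
)
for finding in scan_quality(transactions):
    print(finding)
```

A scan like this would run on every pipeline load, so no human has to inspect millions of rows by hand.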

"Automation can play a key role in data mapping for accuracy, handling jobs and automating workflows," Halper said. "Where we're seeing most is automation and augmentation for data classification and data quality."

For example, AI-powered tools such as data observability platforms are used to scan data pipelines to identify problem areas.

"Using these intelligent tools is important," Halper said. "Organizations are realizing they need to look for tools that are going to help them with [data readiness for AI]. There are these tool organizations can make use of as they continue to scale their amount of data."

Data quality and proper technology -- in concert with organizational support -- are still not enough on their own to guarantee an enterprise's readiness for developing and deploying AI models and applications.

To protect organizations from potentially exposing sensitive information, violating regulations or simply taking actions without proper due diligence, guidelines must be in place to limit who can access certain AI models and applications as well as how those can be used.

When self-service analytics platforms were first developed, enabling business users to work with data in addition to the IT teams that historically oversaw all data management and analysis, organizations needed to develop data governance frameworks.

Those data governance frameworks, when done right, simultaneously enable confident self-service analysis and decision-making while protecting the enterprise from harm. As the use of AI models and applications -- specifically generative AI applications that enable more people to engage with data -- becomes more widespread within the enterprise, similar governance frameworks need to be in place for their use.

"For AI to succeed, it's going to require governance," Halper said.

AI requires new types of data, such as text and images. In addition, AI requires the use of various platforms, including data warehouses and lakes, vector databases that enable unstructured data discovery, and retrieval-augmented generation pipelines to train models and applications with relevant data.
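To show how those pieces fit together, here is a schematic sketch of the retrieval step in a retrieval-augmented generation pipeline. The embed() function is a toy stand-in (a real system would call an LLM embedding model), and the in-memory array stands in for a vector database.

```python
# Schematic sketch of RAG retrieval. embed() is a toy stand-in for an LLM
# embedding model; the NumPy array stands in for a vector database.
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Toy embedding: hash each token into a fixed-size unit vector."""
    vec = np.zeros(dim)
    for token in text.lower().split():
        vec[hash(token) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

documents = [
    "Q3 revenue grew 12 percent year over year.",
    "The incident postmortem cites a misconfigured load balancer.",
    "Employee handbook: remote work policy and expense limits.",
]
index = np.stack([embed(d) for d in documents])   # the "vector database"

query = "what caused the incident"
scores = index @ embed(query)                     # cosine similarity of unit vectors
best = documents[int(scores.argmax())]

# The retrieved passage grounds the model's answer in enterprise data.
prompt = f"Answer using this context:\n{best}\n\nQuestion: {query}"
print(prompt)
```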

Governance, therefore, encompasses diverse data and multiple environments to address AI readiness, according to Halper. Governance also must include oversight of the various types of AI, including generative AI, to determine whether outputs are toxic or incorrect as well as whether there's bias in a model or application.

"The future starts now, and there's a lot to think about," Halper said. "Data management is going to continue to be a journey in terms of managing new data for AI and beyond. Organizations really need to think forward strategically and not be caught off-guard."

Eric Avidon is a senior news writer for TechTarget Editorial and a journalist with more than 25 years of experience. He covers analytics and data management.

Continue reading here:

AI readiness requires buy-in, technology and good governance - TechTarget


Using AI to map research in the School of Arts & Sciences – Penn Today

When Colin Twomey became interim executive director of the Data Driven Discovery Initiative (DDDI) last summer, he says, his background in behavioral ecology meant that he had a good idea of the data science needs for his own field and some idea for biology, genetics, and evolution. However, with DDDI serving as the hub for data science education and research across the School of Arts & Sciences, Twomey says he found his understanding of the needs for chemistry, sociology, and other fields to be lacking.

To tackle the problem, he followed his instinct as an ecologist: map out the system and get a big-picture view before digging into the details. What resulted is a work-in-progress map intended to capture all published research by current faculty in SAS, including their work before coming to Penn, encompassing research that spans several decades. It uses the same technology as ChatGPT and similar large language models (LLMs).

"I really think of it as like a Google Maps for research. It gives you a very fast way to get oriented to a really big and complex research environment like Penn," Twomey says. He built what he calls the University Atlas Project, or uAtlas for short, during his personal time, and it's just one of the ways Penn is leading in data-driven research, teaching, and applications.

At first glance, it might look like a single-cell atlas to a scientist or an abstract design to an artist. While the map is still being worked on, each of the more than 40,000 dots is a different publication by a professor, color-coded by their department, and zooming in shows labels for 240 topics. Departments are assigned a specific color. Red is economics. Highlighter-orange is chemistry. Pastel yellow is psychology. Robin's-egg blue is Africana studies. Hot pink is cinema and media studies, and so forth.

The spatial arrangement shows how thematically similar each paper is in relation to another and illustrates the interdisciplinary pursuits of Penn faculty. "There's all sorts of really unexpected overlaps, and it also doesn't put anyone into a box," Twomey says.

"The Department of Physics and Astronomy shows up as very broad," Twomey says. "It has its tendrils into everything, which is kind of amazing; it really does accommodate a very broad range of interests, from social sciences and psychology to chemistry."

The multicolored pattern of dots around labels such as "inequality," "bioethical dilemmas," and "COVID-19 impact" shows how researchers in psychology, sociology, political science, philosophy, economics, Africana studies, and more are leading on the great challenges of our time.

The map is also searchable by name, which shows the varied interests and cross-disciplinary work of Penn faculty. For example, the spread-out clusters for physics professor Vijay Balasubramanian reflect his interests in string theory and neuroscience.

Users can also adjust the view to show only works published before or after a selected year. Twomey was struck by a bridge of green dots, for earth and environmental science, connecting hard science subjects, and specifically the topic of past climate variability, to the social sciences. The bridge, labeled "climate communication," didn't start appearing until after about 2004, Twomey says, pointing to research led by Michael Mann.

Twomey says the tool has been useful to him in identifying what is going on in different departments. And he says it can also help faculty identify potential collaborators and help prospective graduate students and postdocs determine with whom they want to work. "My other hope for this is that, once you do this for long enough, you get these pictures of where the University is evolving over time, where research has moved," Twomey says.

Bhuvnesh Jain, the Walter H. and Leonore C. Annenberg Professor in the Natural Sciences and faculty co-director of DDDI, says he loves that Twomey's map is both sophisticated, in its use of an LLM to embed research papers into an abstract space, and visually intuitive.

"The map transcends discipline and sub-discipline labels and shows how closely connected a lot of our work is," Jain says, adding that he had fun brainstorming with Twomey on the applications of this tool. "I am confident that the users will range from incoming Penn undergraduates to the deans of our schools, who will be able to rapidly visualize the hubs of activity, the interconnections of different research efforts, and the growth areas in different fields."

To build this map, Twomey began by figuring out the affiliations of SAS faculty, which he says was a challenge because the data live in many places across the University. He then used Python to distill the data and a large language model to map the semantic content of each publication into a high-dimensional embedding space. But Twomey says visualizing hundreds of dimensions simultaneously is impractical, so the final map compresses data into a two-dimensional representation that best preserves the relationships between papers that address similar topics.
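A condensed sketch of that embed-then-project pipeline is below. Random clustered vectors stand in for the LLM embeddings of paper abstracts, and scikit-learn's t-SNE stands in for the projection step; the article does not say which dimensionality-reduction method uAtlas actually uses.

```python
# Condensed sketch of the embed-then-project pipeline. Random clustered
# vectors stand in for LLM embeddings of abstracts; t-SNE is an assumed
# choice of projection, not necessarily what uAtlas uses.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(42)
n_papers, embed_dim = 500, 768           # assumed sizes for illustration

# Pretend each paper's abstract was embedded by a language model; give each
# of five "departments" its own cluster center so structure is visible.
departments = rng.integers(0, 5, size=n_papers)
centers = rng.normal(size=(5, embed_dim))
embeddings = centers[departments] + 0.5 * rng.normal(size=(n_papers, embed_dim))

# Compress hundreds of dimensions down to two while preserving local
# neighborhoods, so thematically similar papers land near each other.
coords = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(embeddings)

print(coords.shape)  # (500, 2): one (x, y) map position per paper dot
```

Coloring each (x, y) point by its department index reproduces the look of the map described above.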

He next used the programming language Elixir to build a custom web server so the map would appear on a user-friendly website. Twomey then used an LLM again to add the research topics, choosing a labeling system that he felt was neither too dense nor too sparse, so "it's not overwhelming but still gives you enough waypoints."

To date, the map captures most but not all School of Arts & Sciences faculty as Twomey continues to work on the project. He also notes that some data from indexes like Google Scholar and OpenAlex may be incorrect, meaning a professor may show up as incorrectly attached to a paper or the year is wrong, so additional validation is needed. Twomey's goal is to eventually include research from graduate students and postdocs as well and to expand beyond SAS.

"The School of Arts and Sciences has 28 departments and 34 centers, and seeing how all those intersect is super fascinating, but that's just one piece, one school," Twomey says. "I want to have this Penn-wide and even scale it beyond Penn in the future."

More here:

Using AI to map research in the School of Arts & Sciences - Penn Today


Building a Data Science Platform with Kubernetes | by Avinash Kanumuru | Jul, 2024 – Towards Data Science


When I started in my new role as Manager of Data Science, little did I know about setting up a data science platform for the team. In all my previous roles, I had worked on building models and to some extent deploying models (or at least supporting the team that was deploying models), but I never needed to set up something from scratch (infra, I mean). The data science team did not exist then.

So my first objective was to set up a platform, not just for the data science team in a silo, but one that can be integrated with data engineering and software teams. This is when I was introduced to Kubernetes (k8s) directly. I had heard of it earlier but hadn't worked beyond creating Docker images; someone else would deploy them in some infrastructure.

Now, why is Kubernetes required for the data science team? What are some of the challenges faced by data science teams?

Continued here:

Building a Data Science Platform with Kubernetes | by Avinash Kanumuru | Jul, 2024 - Towards Data Science


Academic Career & Executive Search (ACES) Selected by East Tennessee State University for Tenure-Track Assistant Professor of Computing Computer…

PRESS RELEASES Academic Career & Executive Search (ACES) Selected by East Tennessee State University for Tenure-Track Assistant Professor of Computing Computer Science Search

(June 2024, WEST HARTFORD, CT) Academic Career & Executive Search (ACES), a leading higher education focused executive search firm, has been selected to recruit the next Tenure-Track Assistant Professor of Computing Computer Science at East Tennessee State University. Jennifer Muller, Managing Partner, will be leading the search.

An exceptional opportunity awaits an educator passionate about teaching, mentoring and fostering student success at both the undergraduate and graduate levels at East Tennessee State University (ETSU). Working with a strong research foundation at ETSU, the Assistant Professor of Computing Computer Science will help shape and build an innovative, nationally recognized research program in the Department of Computing. The ideal candidate has a Ph.D. in computer science with the ability to break down and teach complex subject matter to enhance student understanding. Equally important is a passion and drive to promote student success by guiding them in research, sharing knowledge and making meaningful connections. Successful applicants will also engage in service activities in the department, university, and community. This is a nine-month position (mid-August through mid-May), with opportunities available to conduct independent studies and supervise undergraduate and graduate research and thesis activities.

Nestled in the Appalachian Highlands, ETSU is a thriving R2 research university spanning over 350 acres in Johnson City, Tennessee. The University has a diverse student body of 14,000 students from 71 countries and all 50 states, and 800 full-time faculty. ETSU has 160+ academic programs to choose from at the bachelor's, master's, and doctoral levels housed across eleven colleges and schools. The Department of Computing began as a program in the Department of Mathematics in 1972 and became a standalone Department in 1977. The Department is housed within the College of Business and Technology (CBAT), which includes programs in Accountancy, Digital Media, Economics and Finance, Engineering/Engineering Technology, Interior Architecture, Management, Marketing, and Surveying. The Department has more than 500 undergraduate majors, 150 graduate majors, and 25 faculty members. All in-person Computing courses are taught in the recently renovated Brinkly Center, which includes four computer labs, four lecture rooms, and two auditorium-style classrooms. ETSU was named the state's best Bachelor of Science in Computing for the Information Systems Concentration by CollegeFactual.com and Universities.com.

Affordable Excellence

Academic Career & Executive Search, a higher education focused search firm, specializes in affordable excellence, providing perfectly matched candidates to institutions quickly and accurately. Their methodologies, based on frequent touchpoints, pinpoint ideal candidates with speed and meticulous care, resulting in highly successful placements.

They offer flexible service/pricing levels for all positions, ensuring affordability.

Contact Jennifer Muller, Managing Partner, at 860-740-2600 or Jennifer@ACESrch.com, or visit https://acesrch.com/.

More:

Academic Career & Executive Search (ACES) Selected by East Tennessee State University for Tenure-Track Assistant Professor of Computing Computer...


UC computer science engineer works to improve AI explainability – University of Cincinnati

The personal and professional growth I have experienced during my time at UC has been remarkable. One achievement I'm proud of is my research project on developing provenance-based solutions for explainable machine learning models. I have presented this work at prestigious conferences like the Institute of Electrical and Electronics Engineers International Conference on Data Engineering, American Association for the Advancement of Science Annual Meeting, and the Greater Chicago Area Systems Research Workshop.

In addition, I have had the honor of contributing to the scholarly community by serving as a program committee member and a reviewer on the research paper track for various conferences.

I take great pride in my contributions to professional organizations in science and engineering such as the Institute of Electrical and Electronics Engineers. Currently, I am serving a two-year leadership commitment on the Society of Women Engineers' national organization and was honored to speak at two panel sessions dedicated to advancing graduate engineering at the 2023 conference.

Equally fulfilling has been my role in nurturing the next generation of scientists and engineers. I served as a mentor for the UC McNair Scholars, the Society of Women Engineers' "Invent It, Build It" program for K-12 students, and as a judge at events such as the Ohio Academy of Science District Science Day and State Science Day. I was also a counselor at various CEAS summer camps and a professional reviewer of research posters at the 2023 UC Undergraduate Scholarly Showcase.

Reflecting on my journey, I am grateful for the consistency and excellence that have marked my experience. Earning my master's degree in 2023 from UC was a significant milestone towards completing my doctorate. I am also honored by the accolades I have received, including being named Graduate Student Engineer of the Month, and receiving the 2023 Cadence Diversity in Technology Scholarship Award.

Go here to see the original:

UC computer science engineer works to improve AI explainability - University of Cincinnati


More people are turning to mental health AI chatbots. What could go wrong? – National Geographic

Chatbots replace talk therapy

"The accessibility and scalability of digital platforms can significantly lower barriers to mental health care and make it available to a broader population," said Nicholas Jacobson, who researches the use of tech to enhance the assessment and treatment of anxiety and depression at Dartmouth College.

Swept up by a wave of Generative AI, tech companies have been quick to capitalize. Scores of new apps, like WHO's digital health worker Sarah, offer automated counseling, where people can engage in cognitive behavioral therapy sessions (a psychotherapeutic treatment proven to assist users in identifying and changing negative thought patterns) with an AI chatbot.

The arrival of AI, Jacobson adds, will enable adaptive interventions and allow healthcare providers to continuously monitor patients, anticipate when someone may need support, and deliver treatments to alleviate symptoms.

It's not anecdotal either: A systematic review of mental health chatbots found AI chatbots could dramatically cut down symptoms of depression and distress, at least in the short term. Another study used AI to analyze more than 20 million text conversations from real counseling sessions and successfully predicted patient satisfaction and clinical outcomes. Similarly, other studies have been able to detect early signs of major depressive disorder from unguarded facial expressions captured during routine phone unlocks and people's typing patterns.

Most recently, Northwestern University researchers devised a way to identify suicidal behaviour and thoughts without psychiatric records or neural measures. Their AI model correctly estimated self-harm likelihood in 92 out of 100 cases, based on data from 4,019 participants: simple questionnaire responses and behavioral signals such as ranking a random sequence of pictures on a seven-point like-to-dislike scale.

Two of the study's authors, Aggelos Katsaggelos and Shamal Lalvani, expect that, once the model clears clinical trials, specialists will use it for support, such as scheduling patients depending on perceived urgency, and that it will eventually roll out to the public in at-home settings.

But as was evident in Smith's experience, experts urge caution over treating tech solutions as a panacea, since they lack the skill, training, and experience of human therapists. This is especially true of Generative AI, which can be unpredictable, make up information, and regurgitate biases.

When Richard Lewis, a Bristol-based counselor and psychotherapist, tried Woebot (a popular script-based mental health chatbot that can only be accessed via a partner healthcare provider) to help with a topic he was also exploring with his therapist, the bot failed to pick up on the issue's nuances, suggested he "stick to the facts" while removing all the emotional content from his replies, and advised him to reframe his negative thoughts as positive.

As a therapist, Lewis said, correcting or erasing emotions is "the last thing I would want a client to feel and the last thing I would ever suggest."

"Our job is to form a relationship that can hold difficult emotions and feelings for our clients," Lewis added, "to make it easier for them to explore, integrate, or find meaning in them and ultimately know themselves better."

Read the rest here:

More people are turning to mental health AI chatbots. What could go wrong? - National Geographic


How to assess a general-purpose AI model's reliability before it's deployed – MIT News

Foundation models are massive deep-learning models that have been pretrained on an enormous amount of general-purpose, unlabeled data. They can be applied to a variety of tasks, like generating images or answering customer questions.

But these models, which serve as the backbone for powerful artificial intelligence tools like ChatGPT and DALL-E, can offer up incorrect or misleading information. In a safety-critical situation, such as a pedestrian approaching a self-driving car, these mistakes could have serious consequences.

To help prevent such mistakes, researchers from MIT and the MIT-IBM Watson AI Lab developed a technique to estimate the reliability of foundation models before they are deployed to a specific task.

They do this by training a set of foundation models that are slightly different from one another. Then they use their algorithm to assess the consistency of the representations each model learns about the same test data point. If the representations are consistent, it means the model is reliable.

When they compared their technique to state-of-the-art baseline methods, it was better at capturing the reliability of foundation models on a variety of classification tasks.

Someone could use this technique to decide if a model should be applied in a certain setting, without the need to test it on a real-world dataset. This could be especially useful when datasets may not be accessible due to privacy concerns, like in health care settings. In addition, the technique could be used to rank models based on reliability scores, enabling a user to select the best one for their task.

"All models can be wrong, but models that know when they are wrong are more useful. The problem of quantifying uncertainty or reliability gets harder for these foundation models because their abstract representations are difficult to compare. Our method allows you to quantify how reliable a representation model is for any given input data," says senior author Navid Azizan, the Esther and Harold E. Edgerton Assistant Professor in the MIT Department of Mechanical Engineering and the Institute for Data, Systems, and Society (IDSS), and a member of the Laboratory for Information and Decision Systems (LIDS).

He is joined on a paper about the work by lead author Young-Jin Park, a LIDS graduate student; Hao Wang, a research scientist at the MIT-IBM Watson AI Lab; and Shervin Ardeshir, a senior research scientist at Netflix. The paper will be presented at the Conference on Uncertainty in Artificial Intelligence.

Counting the consensus

Traditional machine-learning models are trained to perform a specific task. These models typically make a concrete prediction based on an input. For instance, the model might tell you whether a certain image contains a cat or a dog. In this case, assessing reliability could simply be a matter of looking at the final prediction to see if the model is right.

But foundation models are different. The model is pretrained using general data, in a setting where its creators don't know all the downstream tasks it will be applied to. Users adapt it to their specific tasks after it has already been trained.

Unlike traditional machine-learning models, foundation models don't give concrete outputs like "cat" or "dog" labels. Instead, they generate an abstract representation based on an input data point.

To assess the reliability of a foundation model, the researchers used an ensemble approach by training several models which share many properties but are slightly different from one another.

"Our idea is like counting the consensus. If all those foundation models are giving consistent representations for any data in our dataset, then we can say this model is reliable," Park says.

But they ran into a problem: How could they compare abstract representations?

"These models just output a vector, comprised of some numbers, so we can't compare them easily," he adds.

They solved this problem using an idea called "neighborhood consistency."

For their approach, the researchers prepare a set of reliable reference points to test on the ensemble of models. Then, for each model, they investigate the reference points located near that model's representation of the test point.

By looking at the consistency of neighboring points, they can estimate the reliability of the models.

Aligning the representations

Foundation models map data points in what is known as a representation space. One way to think about this space is as a sphere. Each model maps similar data points to the same part of its sphere, so images of cats go in one place and images of dogs go in another.

But each model would map animals differently in its own sphere, so while cats may be grouped near the South Pole of one sphere, another model could map cats somewhere in the Northern Hemisphere.

The researchers use the neighboring points like anchors to align those spheres so they can make the representations comparable. If a data point's neighbors are consistent across multiple representations, then one should be confident about the reliability of the model's output for that point.
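Below is a minimal sketch of this neighborhood-consistency idea as the article describes it, not the authors' published algorithm. Each model's representations of a shared set of reference points act as anchors, and the Jaccard overlap of each model's k nearest anchors around the test point is one plausible way to score agreement.

```python
# Minimal sketch of neighborhood consistency, as described in the article
# (not the authors' published algorithm). Jaccard overlap of k-nearest
# anchor sets is one plausible agreement score.
import numpy as np

def knn_anchor_ids(test_vec, ref_vecs, k):
    """Indices of the k reference (anchor) points nearest the test point."""
    dists = np.linalg.norm(ref_vecs - test_vec, axis=1)
    return set(np.argsort(dists)[:k].tolist())

def neighborhood_consistency(models, k=10):
    """Average pairwise Jaccard overlap of anchor neighborhoods across models.

    Each model dict holds 'refs' (n_refs x d) and 'test' (d,) representations.
    Because the anchors are shared reference points, the neighborhoods stay
    comparable even though each model orients its own space differently.
    """
    hoods = [knn_anchor_ids(m["test"], m["refs"], k) for m in models]
    scores = [len(a & b) / len(a | b)
              for i, a in enumerate(hoods) for b in hoods[i + 1:]]
    return float(np.mean(scores))

rng = np.random.default_rng(1)
shared = rng.normal(size=(100, 16))          # latent geometry of reference points
ensemble = []
for _ in range(4):                           # four slightly different models
    rot = np.linalg.qr(rng.normal(size=(16, 16)))[0]  # each space oriented differently
    noise = 0.05 * rng.normal(size=shared.shape)
    ensemble.append({"refs": (shared + noise) @ rot, "test": shared[0] @ rot})

print(f"consistency: {neighborhood_consistency(ensemble):.2f}")  # near 1.0 = reliable
```

A test point whose neighbors churn from model to model would instead score near zero, signaling an unreliable representation.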

When they tested this approach on a wide range of classification tasks, they found that it was much more consistent than baselines. Plus, it wasn't tripped up by challenging test points that caused other methods to fail.

Moreover, their approach can be used to assess reliability for any input data, so one could evaluate how well a model works for a particular type of individual, such as a patient with certain characteristics.

"Even if the models all have average performance overall, from an individual point of view, you'd prefer the one that works best for that individual," Wang says.

However, one limitation comes from the fact that they must train an ensemble of large foundation models, which is computationally expensive. In the future, they plan to find more efficient ways to build multiple models, perhaps by using small perturbations of a single model.

This work is funded, in part, by the MIT-IBM Watson AI Lab, MathWorks, and Amazon.

Visit link:

How to assess a general-purpose AI model's reliability before it's deployed - MIT News


Computer Science head shares Google encryption research at International Technologies Conference – Grambling State University

Dr. Bharat Rawal

Dr. Bharat S. Rawal, professor and head of Grambling State University's Department of Computer Science and Digital Technologies, recently discussed a pair of papers he's authored at the 15th International Conference on Computing, Communication and Networking Technologies (ICCCNT), a premier IEEE conference organized at IIT-Mandi, India.

The two papers Dr. Rawal wrote and discussed during the conference were titled "Quantum and AI-Enhanced Digital Twin Systems" and "EUDRL: Explainable Uncertainty-Based Deep Reinforcement Learning for Portfolio Management."

Dr. Rawal said the goal of his papers and conference appearance was to create a unique handshake (an authentication procedure, the RLP protocol) and security (cryptography) mechanism to enable secure communication across several cloud platforms.

"The proposed encryption technique (Quantum Safe / Post Quantum Cryptosystem) will withstand attacks from quantum computers," Dr. Rawal said. "In addition, we are looking into the issues of shifting to next-generation post-quantum crypto systems in cloud infrastructure. We created an RLP protocol using our own No-Sum (NS) difficult mathematical sequence."

Finding the element of an NS sequence is a challenging task, known as NP-hard in computer science. The solution exists; however, determining the correct value requires a significant amount of processing power and time. Verification of the solution takes no time.
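The NS sequence itself is not specified in the article, so the sketch below substitutes the classic subset-sum problem to illustrate the same asymmetry: finding a solution takes exponential search in the worst case, while verifying a claimed solution is a single sum.

```python
# Illustration of hard-to-find / easy-to-verify, using subset-sum as a
# stand-in since the No-Sum (NS) sequence is not specified in the article.
from itertools import combinations

def find_subset(nums, target):
    """Brute-force search: up to 2^n subsets in the worst case."""
    for r in range(1, len(nums) + 1):
        for combo in combinations(nums, r):
            if sum(combo) == target:
                return combo
    return None

def verify_subset(combo, target):
    """Verification is trivial: one addition and one comparison."""
    return combo is not None and sum(combo) == target

nums = [267, 961, 1153, 1000, 1922, 493, 1598, 869, 1766, 1246]
target = 5727  # = 961 + 1922 + 1598 + 1246, so a solution exists

solution = find_subset(nums, target)              # slow: exhaustive search
print(solution, verify_subset(solution, target))  # fast: instant check
```

Scaling the list to a few hundred numbers makes the search astronomically expensive while leaving verification instant, which is the asymmetry a cryptographic puzzle relies on.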

Dr. Rawal said Google can be greatly beneficial because it provides precise results.

"Search results from Google are accurate because their algorithms are among the best in the world," Dr. Rawal said. "To increase relevance, they make use of user behavior and latent semantic indexing. Improved AI-enabled search features facilitate finding rapid answers; Google provides response boxes and People Also Ask sections."

Google has a massive database. Its large user base allows it to filter results based on user behavior, which makes sure that relevant content appears first. Users should search for research papers that have been published in peer-reviewed journals, as these publications are subject to a high level of examination, which helps to reduce inaccuracies.

But because Google gathers user data, privacy concerns should also be a consideration for users.

"Google provides customized results that have the potential to produce filter bubbles, which restrict exposure to a range of viewpoints," Dr. Rawal said.

It also has commercial bias. Google depends on advertisements for its business, which could sway search results. It can also have algorithmic biases where content or opinions may be unintentionally favored by algorithms.

Dr. Rawal said his appearance at the international conference will hopefully spread knowledge of GSU's Computer Science and Digital Technologies programs across the globe.

"IIT and NIT are two of India's most prestigious university systems, as well as internationally recognized educational institutions," Dr. Rawal said. "Our goal is to collaborate with these two institutes to foster AI and quantum computing initiatives for all of our partner institutions."

"Millions of Indian high school and college students apply to these two famous institutions. Becoming their academic and research partner will help us market our university to the world's fastest-growing economy and the greatest source of international students to the United States."


Go here to read the rest:

Computer Science head shares Google encryption research at International Technologies Conference - Grambling State University
