
Clark is breaking down barriers and staking her claim in cybersecurity field – UKNow

LEXINGTON, Ky. (Jan. 22, 2024) – Tiffany Clark found herself entangled in a challenging predicament this fall. She grappled with conflicting options as a looming decision deadline cast its shadow. Many young adults her age would have coveted the intricacy of Clark's dilemma: Which well-paying, potentially career-making summer internship should I choose: the one nestled in the sunny theme park capital of the world, or the other in glitzy proximity to the famed Hollywood sign?

A junior computer science major in the University of Kentucky's Stanley and Karen Pigman College of Engineering, Clark meticulously considered the prospect of interning for either Northrop Grumman in Redondo Beach, California, or Lockheed Martin in Orlando, Florida, come the summer of 2024.

Both alternatives stand in stark contrast to Clark's upbringing in Caneyville, Kentucky. With its modest population of just over 500, the Grayson County community offers scarce professional prospects for individuals skilled in cybersecurity and software engineering.

Following the sudden death of her mother, Tiffany Clark was raised by her maternal grandmother and, later, her maternal aunt and grandfather, alongside her older brother. Throughout her formative years, their ventures seldom extended beyond close-knit Grayson County's boundaries. With an expanded view of the world, however, she now grapples with the notion of continuing to reside in such a sparsely populated community.

"My grandfather's lived there his entire life," Clark laughs. "He won't even drive outside the county. If he has to go to Hardin County to look for tractor parts, he calls up his friends to take him. He's happy living there and not leaving the county, and that's totally fine. I just could never do that."

Clark committed to attending UK without ever setting foot on its campus. Yet this daring leap of faith sparked a transformative college experience, awakening in her a profound wanderlust for global exploration. A testament to this curiosity, she spent two weeks abroad in England and France last year as part of UK's Chellgren Center for Undergraduate Excellence. She aspires to leverage a career in cybersecurity to continue discovering the diverse facets of our increasingly interconnected world, and maybe catch a bad guy or two along the way.

"It's mainly about having an impact," Clark explains of her career aspirations. "Politically, I'm very neutral, but I can see that in terms of national security, the government does a lot of great things for us that we may never even know about, and cybersecurity plays a large part in that; so I'm looking into the FBI or the CIA, or a defense contracting company that provides products and does contracts with the government."

Because it can call upon skills in both computer and electrical engineering, Clark says that she enjoys the challenge of finding computer vulnerabilities and trying to find ways people can exploit or hack into systems. She says she did not own a computer until her senior year of high school. Now, however, she is breaking down barriers and staking her claim in the still-male-dominated field of computing.

"Being a female computer scientist can be great," Clark explains. "It can make you stand out. But it can also be a little intimidating, maybe a bit of impostor syndrome, because your class will be dominated by males. Even with my internship last summer, the team had six interns, and I was the only girl. And in my office, there was only one woman, so it adds a little bit of 'oh, I need to do great. I need to stand out,' but I also needed to interact with these guys all the time."

Globally, only about a quarter of people in cybersecurity roles are female. As vice president of UK's chapter of the Society of Women Engineers, Clark works to empower her peers. Through outreach, she hopes to inspire women of all ages to consider taking STEM courses and to develop an inclusive community within the Pigman College of Engineering. She learned about the possible internships at the Women in Engineering conference in Los Angeles, the world's largest conference for women in engineering and technology.

A Presidential Scholar, Chellgren Fellow and a member of the UK Lewis Honors College, Clark says the support system she found at UK has been instrumental to her academic achievements. The mentorship she receives from Stanley and Karen Pigman has been particularly gratifying.

"I'm so thankful to be a Pigman Scholar because of the mentorship they provide," Clark said. "Stan and Karen have my phone number! They'll call us during midterms. They'll ask, 'Do you need help? What do you need? Have you gone to this resource, that resource?' They understand how difficult the coursework can be. Just having that kind of motivation, somebody telling you, 'Good job, you're going to do great things,' is really motivating."

With the Pigmans' encouragement, Clark has already completed two summer internships, including one in Boston for defense contractor Riverside, and she is determined to take advantage of opportunities as they present themselves.

Clark's pivotal decision to attend UK, coupled with the expansive possibilities inherent in her chosen field and the encouragement and support of mentors, didn't just create opportunities; it shattered the confines of her small-town roots.

"When I get older, I really would like to travel somewhere in Asia, like South Korea," Clark enthuses. "I do enjoy traveling. I want to go to a lot of different places, but definitely Asia. It's a completely different culture. Going to England or even Paris, you can still see similarities to the U.S., and a lot of people there speak English. But Asia? The culture is totally different."

The closest Clark may get to Asia in the immediate future, however, could be a visit to the China or Japan Pavilions at Epcot next summer. After thoughtful discussions with those whose insights she values and meticulously weighing the pros and cons associated with each internship, she ultimately chose the opportunity with Lockheed Martin in balmy Orlando.

"While I did ask people for their opinion, and they gave it, everyone, including the Pigmans, told me that this was my decision and they were both good offers," Clark says. "No one would have been disappointed, no matter which offer I accepted. Everyone told me to go with my gut and do what I want because, ultimately, it's my life and my future. Both offers would have been great, and I'm just sad that I couldn't do both."

In the end, the choice boiled down to what is best suited for Clark's future.

"I was given an offer in Northrop Grumman's space unit," Clark explains of the factor that ultimately settled the Solomonic choice. "I believe that Lockheed Martin's cyber unit will align better with my interests."

At Lockheed Martin, Clark will work in Rotary and Mission Systems, doing software engineering with the Training, Logistics and Sustainment team. She hopes that her decision will eventually lead to a leadership position where she can one day follow in the footsteps of her mentors, the Pigmans, and give back to her community and alma mater.

"They really encourage the circle of giving back once you've paid your dues," says Clark, who currently works part-time in the college's philanthropy office. "I'd like to do something like that, to give back to UK and the Pigman College of Engineering because they've had such an impact on me."

See the article here:

Clark is breaking down barriers and staking her claim in cybersecurity field - UKNow

Read More..

New MIT CSAIL study suggests that AI won't steal as many jobs as expected – TechCrunch

Image Credits: Kirillm / Getty Images

Will AI automate human jobs, and if so, which jobs and when?

That's the trio of questions a new research study from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), out this morning, tries to answer.

There have been many attempts to extrapolate and project how the AI technologies of today, like large language models, might impact people's livelihoods and whole economies in the future.

Goldman Sachs estimates that AI could automate 25% of the entire labor market in the next few years. According to McKinsey, nearly half of all work will be AI-driven by 2055. A survey from the University of Pennsylvania, NYU and Princeton finds that ChatGPT alone could impact around 80% of jobs. And a report from the outplacement firm Challenger, Gray & Christmas suggests that AI is already replacing thousands of workers.

But in their study, the MIT researchers sought to move beyond what they characterize as task-based comparisons and assess how feasible it is that AI will perform certain roles and how likely businesses are to actually replace workers with AI tech.

Contrary to what one (including this reporter) might expect, the MIT researchers found that the majority of jobs previously identified as being at risk of AI displacement aren't, in fact, economically beneficial to automate, at least at present.

The key takeaway, says Neil Thompson, a research scientist at MIT CSAIL and a co-author on the study, is that the coming AI disruption might happen slower and less dramatically than some commentators are suggesting.

"Like much of the recent research, we find significant potential for AI to automate tasks," Thompson told TechCrunch in an email interview. "But we're able to show that many of these tasks are not yet attractive to automate."

Now, in an important caveat, the study only looked at jobs requiring visual analysis, that is, jobs involving tasks like inspecting products for quality at the end of a manufacturing line. The researchers didn't investigate the potential impact of text- and image-generating models, like ChatGPT and Midjourney, on workers and the economy; they leave that to follow-up studies.

In conducting this study, the researchers surveyed workers to understand what an AI system would have to accomplish, task-wise, to fully replace their jobs. They then modeled the cost of building an AI system capable of doing all this, and also modeled whether businesses (specifically non-farm U.S.-based businesses) would be willing to pay both the upfront and operating expenses for such a system.

Early in the study, the researchers give the example of a baker.

A baker spends about 6% of their time checking food quality, according to the U.S. Bureau of Labor Statistics, a task that could be (and is being) automated by AI. A bakery employing five bakers making $48,000 per year could save $14,000 were it to automate food quality checks. But by the study's estimates, a bare-bones, from-scratch AI system up to the task would cost $165,000 to deploy and $122,840 per year to maintain ... and that's on the low end.
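To make the arithmetic behind that example concrete, here is a minimal back-of-the-envelope sketch; the dollar figures are the ones cited in the article, and the comparison is purely illustrative rather than the study's actual cost model.

```python
# Back-of-the-envelope sketch of the baker example above (illustrative only;
# the study's full cost model is far more detailed).

NUM_BAKERS = 5
SALARY = 48_000          # annual salary per baker, from the article
VISION_SHARE = 0.06      # ~6% of time spent on visual quality checks

DEPLOY_COST = 165_000    # one-time cost of a bare-bones, from-scratch system
ANNUAL_UPKEEP = 122_840  # yearly maintenance cost cited in the article

annual_savings = NUM_BAKERS * SALARY * VISION_SHARE
print(f"Wages attributable to vision tasks: ${annual_savings:,.0f}/yr")  # ~$14,400,
# close to the article's rounded $14,000 figure

shortfall = ANNUAL_UPKEEP - annual_savings
print(f"Upkeep exceeds the savings by ${shortfall:,.0f} per year, "
      f"before the ${DEPLOY_COST:,} deployment cost")
```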

"We find that only 23% of the wages being paid to humans for doing vision tasks would be economically attractive to automate with AI," Thompson said. "Humans are still the better economic choice for doing these parts of jobs."

Now, the study does account for self-hosted, self-service AI systems sold through vendors like OpenAI that only need to be fine-tuned to particular tasks rather than trained from the ground up. But according to the researchers, even with a system costing as little as $1,000, there are lots of jobs, albeit low-wage and multitasking-dependent ones, that wouldn't make economic sense for a business to automate.

"Even if we consider the impact of computer vision just within vision tasks, we find that the rate of job loss is lower than that already experienced in the economy," the researchers write in the study. "Even with rapid decreases in cost of 20% per year, it would still take decades for computer vision tasks to become economically efficient for firms."
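As a rough illustration of why a 20% annual cost decline still implies a long wait, the sketch below reuses the hypothetical baker numbers from above; the break-even threshold is borrowed from that example, not taken from the study itself.

```python
# Rough illustration of the 20%-per-year cost-decline claim above, reusing the
# hypothetical baker figures; the study's own threshold model differs.

annual_savings = 14_400      # vision-task wages from the baker example
annual_upkeep = 122_840      # current yearly cost of running the system
decline = 0.20               # assumed 20% cost decrease per year

years = 0
while annual_upkeep > annual_savings:
    annual_upkeep *= (1 - decline)
    years += 1

print(f"Upkeep drops below the savings after ~{years} years")  # ~10 years,
# and that still ignores the up-front deployment cost, hence "decades"
```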

The study has a number of limitations, which the researchers, to their credit, admit. For example, it doesn't consider cases where AI can augment rather than replace human labor (e.g., analyzing an athlete's golf swing) or create new tasks and jobs (e.g., maintaining an AI system) that didn't exist before. Moreover, it doesn't factor in all the possible cost savings that can come from pre-trained models like GPT-4.

One wonders whether the researchers might've felt pressure to reach certain conclusions by the study's backer, the MIT-IBM Watson AI Lab, which was created with a $240 million, 10-year gift from IBM, a company with a vested interest in ensuring that AI is perceived as nonthreatening.

But the researchers assert this isn't the case.

"We were motivated by the enormous success of deep learning, the leading form of AI, across many tasks and the desire to understand what this would mean for the automation of human jobs," Thompson said. "For policymakers, our results should reinforce the importance of preparing for AI job automation ... But our results also reveal that this process will take years, or even decades, to unfold, and thus that there is time for policy initiatives to be put into place. For AI researchers and developers, this work points to the importance of decreasing the costs of AI deployments and of increasing the scope of how they can be deployed. These will be important for making AI economically attractive for firms to use for automation."

The rest is here:

New MIT CSAIL study suggests that AI won't steal as many jobs as expected - TechCrunch

Read More..

Senators on Using Artificial Intelligence in Agriculture It’s already here. – AG INFORMATION NETWORK OF THE … – AGInfo Ag Information Network

The Senate Agriculture Committee hosted a hearing on AI and innovation in American agriculture. In her opening statement, Michigan Senator and Chair Debbie Stabenow pointed to agriculture's role in technology.

"American agriculture has always been at the forefront of innovation. It's imperative we strike a balance between harnessing the benefits A.I. offers while addressing the concerns it raises."

Concerns about A.I. include data privacy, workforce implications, and equitable access to the technology. Stabenow says the reality is A.I. is already being integrated into our daily lives.

"In fact, I'm going to pause. My entire statement up to this point was generated by A.I., and it's something I would have said. So it's incredible."

Panelist Dr. Mason Earles with the University of California, Davis, defines A.I.

"Put simply, an A.I. is a computer program [...] physical action."

Follow this link:
Senators on Using Artificial Intelligence in Agriculture It's already here. - AG INFORMATION NETWORK OF THE ... - AGInfo Ag Information Network

Read More..

Carnegie Mellon reveals it was hit by a cyberattack over the summer – Engadget

A cyberattack hit Carnegie Mellon University last summer, and the attackers breached personal data, according to a disclosure from the school last week. The Pittsburgh-based university, known for its top tech and computer science programs, said on Friday that the attack impacted 7,300 students, employees, contractors and other affiliates.

"There is no evidence of fraud or inappropriate use of the information from those files," a statement from CMU said. Still, the attackers likely accessed and copied data that included names, Social Security numbers and birth dates. With help from law enforcement, CMU disabled access to that copied data, according to the school.

It started on August 25, when unauthorized users accessed CMU's systems. The university says it began recovery processes and an investigation into the incident that concluded months later in December, while notifications to impacted parties began to go out last week. Impacted parties will receive credit monitoring services to mitigate further damage.

CMU had not responded to a request for comment and further information about the attack by the time of publication.

Excerpt from:

Carnegie Mellon reveals it was hit by a cyberattack over the summer - Engadget

Read More..

AI is the buzz, the big opportunity and the risk to watch among the Davos glitterati – The Associated Press

DAVOS, Switzerland (AP) – Artificial intelligence is easily the biggest buzzword for world leaders and corporate bosses diving into big ideas at the World Economic Forum's glitzy annual meeting in Davos. Breathtaking advances in generative AI stunned the world last year, and the elite crowd is angling to take advantage of its promise and minimize its risks.

In a sign of ChatGPT maker OpenAI's skyrocketing profile, CEO Sam Altman made his Davos debut to rock star crowds, with his benefactor, Microsoft CEO Satya Nadella, hot on his heels.

Illustrating AI's geopolitical importance like few other technologies before it, the word was on the lips of world leaders from China to France. It was visible across the Swiss Alpine town and percolated through afterparties.

Here's a look at the buzz:

The leadership drama at the AI world's much-ballyhooed chatbot maker followed Altman and Nadella to the swanky Swiss snows.

Altman's sudden firing and swift rehiring last year cemented his position as the face of the generative AI revolution, but questions about the boardroom bustup and OpenAI's governance lingered. He told a Bloomberg interviewer that he's focused on getting a great full board in place and deflected further questions.

At a Davos panel on technology and humanity Thursday, a question about what Altman learned from the upheaval came at the end.

"We had known that our board had gotten too small, and we knew that we didn't have a level of experience we needed," Altman said. "But last year was such a wild year for us in so many ways that we sort of just neglected it."

Altman added that "for every one step we take closer to very powerful AI, everybody's character gets, like, plus 10 crazy points. It's a very stressful thing. And it should be, because we're trying to be responsible about very high stakes."

From China to Europe, top officials staked their positions on AI as the world grapples with regulating the rapidly developing technology that has big implications for workplaces, elections and privacy.

The European Union has devised the world's first comprehensive AI rules ahead of a busy election year, with AI-powered misinformation and disinformation the biggest risk to the global economy as it threatens to erode democracy and polarize society, according to a World Economic Forum report released last week.

Chinese Premier Li Qiang called AI a "double-edged sword."

"Human beings must control the machines instead of having the machines control us," he said in a speech Tuesday.

AI must be guided in a direction that is conducive to the progress of humanity, so there should be a red line in AI development, a red line that must not be crossed, Li said, without elaborating.

China, one of the world's centers of AI development, wants to step up communication and cooperation with all parties on improving global AI governance, Li said.

China has released interim regulations for managing generative AI, but the EU broke ground with its AI Act, which won a hard-fought political deal last month and awaits final sign-off.

European Commission President Ursula von der Leyen said AI is "a very significant opportunity, if used in a responsible way."

She said the global race is already on to develop and adopt AI, and touted the 27-nation EU's efforts, including the AI Act and a program pairing supercomputers with small and midsized businesses to train large AI models.

French President Emmanuel Macron said he's a strong believer in AI and that his country is attractive and competitive for the industry. He played up France's role in helping coordinate regulation on deepfake images and videos created with AI, as well as plans to host a follow-up summit on AI safety after an inaugural gathering in Britain in November.

The letters AI were omnipresent along the Davos Promenade, where consulting firms and tech giants are among the groups that swoop onto the main drag each year, renting out shops and revamping them into showcase pavilions.

Inside the main conference center, a giant digital wall emanated rolling images of AI art and computer-generated conceptions of wildlife and nature like exotic birds or tropical streams.

Davos-goers who wanted to delve more deeply into the technical ins and outs of artificial intelligence could drop in to sessions at the AI House.

Generative AI systems like ChatGPT and Google's Bard captivated the world by rapidly spewing out new poems, images and computer code and are expected to have a sweeping impact on life and work.

The technology could help give a boost to the stagnating global economy, said Nadella, whose company is rolling out the technology in its products.

The Microsoft chief said he's very optimistic about AI being that general purpose technology that drives economic growth.

Business leaders predicted AI will help automate mundane work tasks or make it easier for people to do advanced jobs, but they also warned that it would threaten workers who can't keep up.

A survey of 4,700 CEOs in more than 100 countries by PwC, released at the start of the Davos meetings, said 14% think they'll have to lay off staff because of the rise of generative AI.

"There isn't an area, there isn't an industry that's not going to be impacted by AI," said Julie Sweet, CEO of consulting firm Accenture.

For those who can move with the change, AI promises to transform tasks like computer coding and customer relations and streamline business functions like invoicing, IBM CEO Arvind Krishna said.

"If you embrace AI, you're going to make yourself a lot more productive," he said. "If you do not ... you're going to find that you do not have a job."

During a session featuring Meta chief AI scientist Yann LeCun, talk about risks and regulation led to the moderator's hypothetical example of infinitely conversant sexbots that could be built by anyone using open source technology.

Taking the high road, LeCun replied that AI can't be dominated by a handful of Silicon Valley tech giants if it's going to serve people around the world with different languages, cultures and values.

"You do not want this to be under the control of a small number of private companies," he said.

Chan reported from London. AP Technology Writer Matt O'Brien contributed from Providence, Rhode Island.

This story has been corrected to show the U.K. AI safety summit was in November not October.

The rest is here:
AI is the buzz, the big opportunity and the risk to watch among the Davos glitterati - The Associated Press

Read More..

Researcher Develops Computer Algorithm for Disaster Planning and Response – Georgia State University News

ATLANTA – Armin Mikler has been interested in disaster and emergency response since Hurricane Katrina devastated the Gulf Coast in 2005, killing more than 1,800 people, causing more than $100 billion in damage and exposing serious flaws in the nation's ability to respond to disasters.

Mikler, chair of the Department of Computer Sciences at Georgia State University, began working with colleagues at the University of North Texas to develop tools to improve disaster planning and response.

Mikler and his group have developed an algorithm, called the receiving-staging-storing-distributing (RSSD) algorithm, to help public health agencies and others develop a fast and effective response, whether they're dealing with a hurricane or an anthrax attack. In recent tests, they confirmed that it was faster and, in many situations, more effective in helping responders get critical supplies where they're most needed.

In emergency situations, the population in the affected area needs to be essentially divided up so that medication and other resources can be distributed effectively. This requires the creation of drop points, places in the affected area where supplies are delivered from a central point, or depot, such as the Strategic National Stockpile. The number of vehicles needed to deliver supplies and their carrying capacities are also major factors. When given the capacity of the vehicles and a time limit, the RSSD algorithm can work out ideal routes to points of delivery.
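The RSSD algorithm itself is not spelled out here, but as a rough illustration of the kind of capacity-constrained, "good enough" route building the problem calls for, consider the minimal greedy sketch below; the depot, drop points, demands and vehicle capacity are all invented for the example, and this heuristic is not the RSSD algorithm.

```python
import math

# Minimal "good enough" sketch of capacity-constrained route building, in the
# spirit of the satisficing approach described in the article. Not the RSSD
# algorithm; all coordinates, demands and the capacity are made up.

DEPOT = (0.0, 0.0)
DROP_POINTS = {            # point -> (x, y, pallets needed)
    "A": (2.0, 1.0, 4),
    "B": (5.0, 3.0, 6),
    "C": (1.0, 6.0, 3),
    "D": (7.0, 7.0, 5),
}
VEHICLE_CAPACITY = 10      # pallets per truck

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def build_routes(points, capacity):
    """Greedy nearest-neighbor routing: fast and feasible, not optimal."""
    remaining = dict(points)
    routes = []
    while remaining:
        load, pos, route = 0, DEPOT, []
        while remaining:
            # pick the nearest drop point that still fits on the truck
            feasible = [(dist(pos, (x, y)), name)
                        for name, (x, y, d) in remaining.items()
                        if load + d <= capacity]
            if not feasible:
                break
            _, name = min(feasible)
            x, y, d = remaining.pop(name)
            route.append(name)
            load += d
            pos = (x, y)
        routes.append(route)
    return routes

print(build_routes(DROP_POINTS, VEHICLE_CAPACITY))
# e.g. [['A', 'B'], ['C', 'D']] -- each truck's route stays within capacity
```

An optimization solver would search for the provably shortest set of routes; a satisficing heuristic like this simply returns a feasible plan immediately, which is the trade-off the article describes.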

"The question needs to be answered, how do we get from the central point where they drop off, how do we deliver it to all the points where it's actually needed?" Mikler said.

"That depends on how many points we have. And that is a very fluid problem, in the case of such emergencies. For instance, we don't know exactly how many points of dispensing would actually be opening."

Because of the fluidity of these situations, the algorithm he developed needed to be fast to respond effectively to the rapidly changing situations. Usually, problems like this would be solved using an algorithm that provides the best possible and most efficient solution, known as an optimization algorithm. However, optimization algorithms take a long time to find an answer.

"This is time that we often do not have when we need to reconfigure our plans," Mikler said.

In short, Mikler and his colleagues found that fast and "good enough" is better in an emergency than taking too long to find a perfect solution.

To see how this algorithm stacked up to the others in both speed and accuracy, Mikler and Ph.D. student Emma McDaniel conducted benchmarking tests with the RSSD algorithm and others.

Mikler and McDaniel recently published the results of these experiments in the article "Benchmarking a fast, satisficing vehicle routing algorithm for public health emergency planning and response: Good Enough for Jazz." They found that even though the RSSD algorithm doesn't find the optimal solution, it does find consistently good solutions that take a minimum amount of response time. So, not only is the algorithm itself fast, but it also finds some of the fastest routes for resource delivery.

To benchmark the results, Mikler and McDaniel used a database called the CVRPLIB (Capacitated Vehicle Routing Problem Library) as their baseline for best answers to emergency situations. The database contains optimal vehicle route distances for various combinations of depot locations and the number of people who need supplies. Using these datasets, Mikler and McDaniel compared the RSSD algorithm to three others that solve similar problems. In terms of consistency, RSSD came out on top.

The algorithm, first developed in 2014, has been improved over time and is now integrated into response planning software that is used by the Texas Department of State Health Services to assist in both emergency response planning as well as real-time emergency response. Sampson Akwafuo, an assistant professor at California State University, has also used the algorithm to help plan emergency resource delivery in resource-poor areas in some African countries.

"We're really able to come up with workable, feasible solutions to problems in a much shorter time," Mikler said.

And time is of the essence in disaster and emergency response.

By Katherine Duplessis

Visit link:

Researcher Develops Computer Algorithm for Disaster Planning and Response - Georgia State University News

Read More..

MotoGP, China close to Ducati: Lenovo uses artificial intelligence in the ‘Remote Garage’ – GPOne.com

During each race weekend, the team collects a total of 100 GB of data from the eight Desmosedicis in action, thanks to the approximately 50 sensors present on each. The Chinese company helps the MotoGP team with platforms that exploit the potential of AI

Submitted by Chiara Rainis on Mon, 22/01/2024 - 16:07

On the occasion of the "Campioni in Pista" event organized by Ducati in Madonna di Campiglio, Lenovo unveiled the range of technological solutions with which it will help the MotoGP team in the hunt for a new title.

"The competition will be tough, but we are excited to continue working with the team and making our innovative services available to raise the level of performance. The goal is not only to strive for maximum results, but also to make the technological advances developed on the track accessible to everyone, as with road bikes," said Luca Rossi, president of the Intelligent Devices Group.

Linked to the Emilian brand since 2018, Lenovo's task is to develop programs that transform data into information, run complex simulations and support strategic decisions in a few seconds. During each race weekend, the team collects a total of 100 GB of data from the eight Desmosedicis in action, thanks to approximately 50 sensors present on each.

To make analysis even more precise, rapid and detailed, a hyperconverged infrastructure (HCI), Lenovo ThinkAgile, will be introduced this year, while Lenovo ThinkSystem SE350 edge servers promote mobility and reliability, even in difficult environments. This infrastructure, optimized for artificial intelligence, will power the team's deep learning and machine learning tools with the aim of comparing the data with the riders' sensations.

The data is analysed not only at the circuit but also in the Remote Garage, which allows the technicians at the factory to work on the information in real time, perform complex analyses and collaborate with the group at the track to optimize the configuration of the bikes before they return to action. With this in mind, the quantity of Lenovo hardware has been increased, including monitors, workstations and accessories.

The departments active at Ducati headquarters are also responsible for the aerodynamic and fluid dynamic simulations, processed with High Performance Computing (HPC) technology based on ThinkSystem SD530, SR630 and SR650 servers. Furthermore, to meet the needs of a fast-paced sport, a Cloud Solution Provider (CSP) agreement was signed that makes computing power and additional services available on demand through a public cloud service to quickly adapt to peaks in workload. ThinkPad P1 mobile workstations will then follow the electronics engineers to the starting grid to finalize the motorcycle setup.

Also among the new features is the ThinkStation P360 Ultra platform. A self-driving robot equipped with a wide range of inertial and optical sensors will travel around the circuit at the start of the GP weekend, allowing the team to obtain a digital copy of it as faithful as possible to reality. Through it, a total of 200 GB of information will be collected and processed, at a rate of 2.6 million data points per second (255 MB/s), through LiDAR (Light Detection and Ranging) sensors.

See the rest here:
MotoGP, China close to Ducati: Lenovo uses artificial intelligence in the 'Remote Garage' - GPOne.com

Read More..

A Technologist Spent Years Building an AI Chatbot Tutor. He Decided It Can't Be Done. – EdSurge

When Satya Nitta worked at IBM, he and a team of colleagues took on a bold assignment: Use the latest in artificial intelligence to build a new kind of personal digital tutor.

This was before ChatGPT existed, and fewer people were talking about the wonders of AI. But Nitta was working with what was perhaps the highest-profile AI system at the time, IBM's Watson. That AI tool had pulled off some big wins, including beating humans on the Jeopardy quiz show in 2011.

Nitta says he was optimistic that Watson could power a generalized tutor, but he knew the task would be extremely difficult. "I remember telling IBM top brass that this is going to be a 25-year journey," he recently told EdSurge.

He says his team spent about five years trying, and along the way they helped build some small-scale attempts into learning products, such as a pilot chatbot assistant that was part of a Pearson online psychology courseware system in 2018.

But in the end, Nitta decided that even though the generative AI technology driving excitement these days brings new capabilities that will change education and other fields, the tech just isn't up to delivering on becoming a generalized personal tutor, and won't be for decades at least, if ever.

"We'll have flying cars before we will have AI tutors," he says. "It is a deeply human process that AI is hopelessly incapable of meeting in a meaningful way. It's like being a therapist or like being a nurse."

Instead, he co-founded a new AI company, called Merlyn Mind, that is building other types of AI-powered tools for educators.

Meanwhile, plenty of companies and education leaders these days are hard at work chasing that dream of building AI tutors. Even a recent White House executive order seeks to help the cause.

Earlier this month, Sal Khan, leader of the nonprofit Khan Academy, told the New York Times: "We're at the cusp of using A.I. for probably the biggest positive transformation that education has ever seen. And the way we're going to do that is by giving every student on the planet an artificially intelligent but amazing personal tutor."

Khan Academy has been one of the first organizations to use ChatGPT to try to develop such a tutor, which it calls Khanmigo and which is currently in a pilot phase in a series of schools.

Khan's system does come with an off-putting warning, though, noting that it makes mistakes sometimes. The warning is necessary because all of the latest AI chatbots suffer from what are known as hallucinations, the word used to describe situations in which a chatbot simply fabricates details when it doesn't know the answer to a question asked by a user.

AI experts are busy trying to offset the hallucination problem, and one of the most promising approaches so far is to bring in a separate AI chatbot to check the results of a system like ChatGPT to see if it has likely made up details. That's what researchers at Georgia Tech have been trying, for instance, hoping that their multi-chatbot system can get to the point where any false information is scrubbed from an answer before it is shown to a student. But it's not yet clear that the approach can reach a level of accuracy that educators will accept.
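The article doesn't describe Georgia Tech's system in detail, but the general pattern it points to, one model drafting an answer and a second model vetting it before a student sees it, can be sketched roughly as below; the model names and the ask() helper are hypothetical placeholders, not a real API.

```python
# Hypothetical sketch of the "second chatbot checks the first" pattern described
# above; ask() stands in for whatever chat-completion API is actually used, and
# the prompts are illustrative, not Georgia Tech's.

def ask(model: str, prompt: str) -> str:
    """Placeholder for a call to a hosted chat model."""
    raise NotImplementedError("wire this to your chat-completion client")

def tutored_answer(question: str, source_material: str) -> str:
    draft = ask("tutor-model", f"Answer for a student: {question}")

    verdict = ask(
        "checker-model",
        "List any claims in the ANSWER that are not supported by the SOURCE.\n"
        f"SOURCE:\n{source_material}\n\nANSWER:\n{draft}\n"
        "Reply with NONE if everything is supported.",
    )

    if verdict.strip().upper() == "NONE":
        return draft
    # Otherwise, don't show the student a possibly hallucinated answer.
    return "I'm not confident in my answer yet; let's check the textbook together."
```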

At this critical point in the development of new AI tools, though, it's useful to ask whether a chatbot tutor is the right goal for developers to head toward. Or is there a better metaphor than "tutor" for what generative AI can do to help students and teachers?

Michael Feldstein spends a lot of time experimenting with chatbots these days. He's a longtime edtech consultant and blogger, and in the past he wasn't shy about calling out what he saw as excessive hype by companies selling edtech tools.

In 2015, he famously criticized promises about what was then the latest in AI for education: a tool from a company called Knewton. The CEO of Knewton, Jose Ferreira, said his product would be "like a robot tutor in the sky that can semi-read your mind and figure out what your strengths and weaknesses are, down to the percentile." That led Feldstein to respond that the CEO was selling "snake oil" because, Feldstein argued, the tool was nowhere near living up to that promise. (The assets of Knewton were quietly sold off a few years later.)

So what does Feldstein think of the latest promises by AI experts that effective tutors could be on the near horizon?

"ChatGPT is definitely not snake oil, far from it," he tells EdSurge. "It is also not a robot tutor in the sky that can semi-read your mind. It has new capabilities, and we need to think about what kinds of tutoring functions today's tech can deliver that would be useful to students."

He does think tutoring is a useful way to view what ChatGPT and other new chatbots can do, though. And he says that comes from personal experience.

Feldstein has a relative who is battling a brain hemorrhage, and so Feldstein has been turning to ChatGPT to give him personal lessons in understanding the medical condition and his loved one's prognosis. As Feldstein gets updates from friends and family on Facebook, he says, he asks questions in an ongoing thread in ChatGPT to try to better understand what's happening.

"When I ask it in the right way, it can give me the right amount of detail about, 'What do we know today about her chances of being OK again?'" Feldstein says. "It's not the same as talking to a doctor, but it has tutored me in meaningful ways about a serious subject and helped me become more educated on my relative's condition."

While Feldstein says he would call that a tutor, he argues that it's still important that companies not oversell their AI tools. "We've done a disservice to say they're these all-knowing boxes, or they will be in a few months," he says. "They're tools. They're strange tools. They misbehave in strange ways, as do people."

He points out that even human tutors can make mistakes, but most students have a sense of what they're getting into when they make an appointment with a human tutor.

"When you go into a tutoring center in your college, they don't know everything. You don't know how trained they are. There's a chance they may tell you something that's wrong. But you go in and get the help that you can."

Whatever you call these new AI tools, he says, it will be useful to have an always-on helper that you can ask questions to, even if their results are just a starting point for more learning.

What are new ways that generative AI tools can be used in education, if tutoring ends up not being the right fit?

To Nitta, the stronger role is to serve as an assistant to experts rather than a replacement for an expert tutor. In other words, instead of replacing, say, a therapist, he imagines that chatbots can help a human therapist summarize and organize notes from a session with a patient.

"That's a very helpful tool rather than an AI pretending to be a therapist," he says. Even though that may be seen as boring by some, he argues that the technology's superpower is to automate things that humans don't like to do.

In the educational context, his company is building AI tools designed to help teachers, or to help human tutors, do their jobs better. To that end, Merlyn Mind has taken the unusual step of building its own so-called large language model from scratch designed for education.

Even then, he argues that the best results come when the model is tuned to support specific education domains, by being trained with vetted datasets rather than relying on ChatGPT and other mainstream tools that draw from vast amounts of information from the internet.

"What does a human tutor do well? They know the student, and they provide human motivation," he adds. "We're all about the AI augmenting the tutor."

Read more:
A Technologist Spent Years Building an AI Chatbot Tutor. He Decided It Can't Be Done. - EdSurge

Read More..

Demystifying AI: The Probability Theory Behind LLMs Like OpenAI’s ChatGPT – PYMNTS.com

When a paradigm shift occurs, it is not always obvious to those affected by it.

But there is no "eye of the storm" equivalent when it comes to generative artificial intelligence (AI).

The technology is here. There are already various commercial products available for deployment, and organizations that can effectively leverage it in support of their business goals are likely to outperform their peers that fail to adopt the innovation.

Still, as with many innovations, uncertainty and institutional inertia reign supreme, which is why understanding how the large language models (LLMs) powering AI work is critical not just to piercing the black box of the technology's supposed inscrutability, but also to applying AI tools correctly within an enterprise setting.

The most important thing to understand about the foundational models powering today's AI interfaces and giving them their ability to generate responses is the simple fact that LLMs, like Google's Bard, Anthropic's Claude, OpenAI's ChatGPT and others, are just adding one word at a time.

Underneath the layers of sophisticated algorithmic calculations, that's all there is to it.

That's because, at a fundamental level, generative AI models are built to generate reasonable continuations of text by drawing from a ranked list of words, each given a different weighted probability based on the data set the model was trained on.
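As a toy illustration of that word-by-word process, the sketch below samples a next word from a hand-made probability table; the vocabulary and weights are invented for the example, and a real LLM computes such probabilities with a neural network over an entire token vocabulary rather than a lookup table.

```python
import random

# Toy illustration of "adding one word at a time": the model ranks candidate
# continuations and samples one according to its weight. The table below is
# invented for the example; a real LLM scores tens of thousands of tokens.

NEXT_WORD_PROBS = {
    "the cat sat on the": {"mat": 0.55, "sofa": 0.25, "floor": 0.15, "moon": 0.05},
    "payments data can": {"reveal": 0.5, "reduce": 0.3, "predict": 0.2},
}

def next_word(context: str) -> str:
    candidates = NEXT_WORD_PROBS[context]
    words, weights = zip(*candidates.items())
    return random.choices(words, weights=weights, k=1)[0]

print("the cat sat on the", next_word("the cat sat on the"))
```

Run repeatedly, the same prompt yields "mat" most often but occasionally "sofa" or "floor," which is the weighted-probability behavior the article describes.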

Read more: There Are a Lot of Generative AI Acronyms. Here's What They All Mean

While news of AI that can surpass human intelligence is helping fuel the hype around the technology, the reality is far more driven by math than by myth.

"It is important for everyone to understand that AI learns from data; at the end of the day, [AI] is merely probabilistics and statistics," Akli Adjaoute, AI pioneer and founder and general partner at venture capital fund Exponion, told PYMNTS in November.

But where do the probabilities that determine an AI system's output originate?

The answer lies within the AI model's training data. Peeking into the inner workings of an AI model reveals that the next reasonable continuation is identified, weighted and generated not word by word so much as piece by piece, as AI models break words apart into more manageable sub-word tokens.

That is a big part of why prompt engineering for AI models is an emerging skillset. After all, different prompts produce different outputs based on the probabilities inherent to each reasonable continuation, meaning that to get the best output, you need to have a clear idea of where to point the provided input or query.

It also means that the data informing the weight given to each probabilistic outcome must be relevant to the query. The more relevant, the better.

See also:Tailoring AI Solutions by Industry Key to Scalability

While PYMNTS Intelligence has found that more than eight in 10 business leaders (84%) believe generative AI will positively impact the workforce, generative AI systems are only as good as the data they're trained on. That's why the largest AI players are in an arms race to acquire the best training data sets.

"There's a long way to go before there's a futuristic version of AI where machines think and make decisions. Humans will be around for quite a while," Tony Wimmer, head of data and analytics at J.P. Morgan Payments, told PYMNTS in March. "And the more that we can write software that has payments data at the heart of it to help humans, the better payments will get."

That's why, to train an AI model to perform to the necessary standard, many enterprises are relying on their own internal data to avoid compromising model outputs. By creating vertically specialized LLMs trained for industry use cases, organizations can deploy AI systems that are able to find the signal within the noise and that can be further fine-tuned to business-specific goals with real-time data.

As Akli Adjaoute told PYMNTS back in November, "if you go into a field where the data is real, particularly in the payments industry, whether it's credit risk, whether it's delinquency, whether it's AML [anti-money laundering], whether it's fraud prevention, anything that touches payments, AI can bring a lot of benefit."


The rest is here:
Demystifying AI: The Probability Theory Behind LLMs Like OpenAI's ChatGPT - PYMNTS.com

Read More..

College Student Sends 456 Applications, Gets Accepted Into One Internship – Newsweek

A college student has revealed how he was able to stay focused and motivated during an intense period in which he applied for 456 internships.

Oliver Wu is a junior studying computer science at the University of Michigan. He plays an active role in campus life, where he's involved with the university's Asian American community and plays volleyball.

Over the past four months though, Wu has been focused on another task on top of his studies and extracurricular activities: landing an internship.

College students today are increasingly mindful of their future career prospects. A survey of students due to graduate in 2023 conducted by job website Handshake found around half were planning to apply to more jobs, while one third were looking at a more diverse range of roles, and one fifth were starting their search sooner.

Wu has only just started out as a junior, but he's already thinking about the future. He told Newsweek he is seeking a "career in tech," though remains flexible about where that will take him.

"I would love to use my skills to develop solutions in sustainability and environmental protection," he said. "However, I do recognize that I am currently still a college student so there is a lot to learn and my plans may change once I get more experience in the industry."

That desire to seek experience has seen him embark on an exhaustive search to land an internship with a top company. It's a search that began before the fall semester even started, when he began noticing openings being posted on online job boards over the summer.

"I started applying in July and soon I hit 200 applications," he said. Wu said he "stopped looking at the number of applications" fairly early into the process, but quickly developed a daily routine.

"Usually, I would open up two or three job boards, see what new jobs were posted, and then apply to all of the jobs if the salary, location, roles etc. met what I was looking for," he said. "I also kept track of which companies I had referrals to and checked on a weekly basis if those companies had opened up their applications."

Wu said that prior to the start of the school year, on his very best days he was completing "15 to 20 applications a day," but that slowed down once he was in class. "On days where I did not apply as much, I would practice my technical interview skills," he added.

He insists he never set out to apply for 456 internships though. "It just kind of happened after applying day in day out," he said. Wu attributes that to the fact that he started applying earlier and continued until late in the year.

During this period, there were times when he felt "burned out" though. "The hardest part was staying positive and working hard, despite having hundreds of rejections," he said.

In those periods, he always made sure to take time off to recharge. He remained motivated though. "I did not want to feel regret that I could have tried harder, so I made up my mind to pursue this with everything I had," he said. He attributes some of that to his religious faith and the belief that whether he succeeded or failed "God has a plan for me."

That plan saw Wu complete an astonishing 56 interviews, as well as 30 technical assessments, 22 second- and third-round assessments and four final rounds off the back of those initial applications.

As stressful as it might have been, those interviews and assessments have proven invaluable. "I feel much less nervous and familiar with the process. Additionally, I know what to expect, and the areas which I need to improve at," he said.

More importantly, at the end of such an intense period of pressure, Wu had something to celebrate. "I ended up accepting an offer at Ford as an enterprise technology intern," he revealed. Wu said the moment he learned he had landed the internship he "felt like a massive weight had been lifted off my shoulders."

"I was in class at the time and I remember stepping out, going into the hallway and jumping up and down while silently screaming in excitement for around 10 minutes," he said. "I ended up landing two more offers, but ultimately accepted Ford."

Eager to share his news and highlight the work that went into it, Wu posted a video to TikTok under the handle oliesandroid revealing how the 456 applications had ultimately been "worth it" in the end. The video has been watched 2.7 million times.

Reflecting on the experience, Wu has one piece of advice for anyone looking to land an internship. "Network," he said. "A big mistake I made was not networking properly and being scared to network and relying on cold applications instead. If I could do it all over again, I would definitely network more. Take a deep breath, and relax, this is a marathon not a sprint."

Newsweek is committed to challenging conventional wisdom and finding connections in the search for common ground.

Newsweek is committed to challenging conventional wisdom and finding connections in the search for common ground.

");jQuery(this).remove()})jQuery('.start-slider').owlCarousel({loop:!1,margin:10,nav:!0,items:1}).on('changed.owl.carousel',function(event){var currentItem=event.item.index;var totalItems=event.item.count;if(currentItem===0){jQuery('.owl-prev').addClass('disabled')}else{jQuery('.owl-prev').removeClass('disabled')}if(currentItem===totalItems-1){jQuery('.owl-next').addClass('disabled')}else{jQuery('.owl-next').removeClass('disabled')}})}})})

Read this article:

College Student Sends 456 Applications, Gets Accepted Into One Internship - Newsweek

Read More..