Category Archives: Artificial General Intelligence

Students who use AI to cheat warned they will be exposed as detection services grow in use – Fox News

Companies that develop software to detect if artificial intelligence or humans authored an essay or other written assignment are having a windfall moment amid ChatGPT's wild success.

ChatGPT launched last November and quickly grew to 100 million monthly active users by January, setting a record as the fastest-growing user base ever. The platform has been especially favored by younger generations, including students in middle school through college.

Surveys have found that about 30% of college students reported using ChatGPT for school assignments since the platform launched, while half of college students say using the system is a form of cheating.

AI detection companies such as Winston AI and Turnitin say ChatGPT's wild success has also benefited their detection businesses, as teachers and employers look to weed out people passing off computer-generated materials as human-written work.

The OpenAI logo on a website displayed on a phone screen and ChatGPT in the AppStore displayed on a phone screen in Krakow, Poland, June 8, 2023. (Jakub Porzycki/NurPhoto via Getty Images)

"It all happened within a week or two. Suddenly, we couldnt keep up with demand," John Renaud, the co-founder of Winston AI, told The Guardian.

Winston AI is billed as the "most powerful AI content detection solution" on the market, with 99% accuracy, according to the company. Users can upload written content they want verified, and, in just a matter of seconds, the system will report if the materials were likely generated by a computer system such as ChatGPT or written by a human.

Winston AI will provide users with a "scale of 0-100, the percentage of odds a copy is generated by a human or AI," as well as look for potential plagiarism.

Renaud explained that AI-generated materials have "tells" that expose them as computer-generated, including "perplexity" and "burstiness." Perplexity, as the company defines it, measures the language patterns in a writing sample to determine whether the text follows the patterns an AI system was trained on or appears unique and human-written.

Burstiness is "when a text features a cluster of words and phrases that are repeated within a short span of time."

Renaud told Fox News Digital he believes "the main question and concern with AI detection is if it will become undetectable one day."

"The fundamentals of generative AI works with predictive data," he explained. "All the models, including ChatGPT, Bard, Claude, Stability Text, have been trained on large datasets and will return outputs that are predictableby well-built and trained AI detectors. I strongly believe this will be the case until there is true AGI (Artificial General Intelligence). But, for now, that is still science fiction.

"So, in the same way that generative AI is trained on large datasets, we trained our detector to identify key patterns in synthetic texts through deep learning."

Renaud said he was initially "very worried" about ChatGPT, but his worries have since eased. AI will always have "tells" that other platforms can detect, he said.

"With predictive AI, well always be able to build a model to predict it," he told The Guardian.

The interior of a school classroom (iStock)

The Winston AI co-founder said the platform is mostly used to scan school essays, while "publishers scanning their journalists'/copywriters' work before publishing" has gained traction and landed in the second spot for the platform's most common use.

"AI detection needs are likely to grow outside of academia. We have a lot of publishers and employers who would like to get clarity on the originality of the content they publish," Renaud added in comments to Fox News Digital.

The chief product officer of Turnitin, another company that detects AI-generated materials, recently published a letter to the editor of The Chronicle of Higher Education arguing that AI materials are easily detected.

Turnitin's Annie Chechitelli responded to an essay published in The Chronicle of Higher Education authored by a student at Columbia University who said, "No professor or software could ever pick up on" materials submitted by students but actually written by a computer.

"In just the first month that our AI detection system was available to educators, we flagged more than 1.3 million academic submissions as having more than 80 percent of their content likely written by AI, flags that alert educators to take a closer look at the submission and then use the information to aid in their decision-making," Chechitelli wrote.

She added that students might assume today's technology can't detect AI-generated schoolwork, but they are simultaneously making a poor bet that tomorrow's technology won't pick up on the cheating.

ChatGPT in an illustration from May 4, 2023 (REUTERS/Dado Ruvic/Illustration)

"Even if you succeed in sneaking past an AI detector or your professor, academic work lives forever, meaning that youre not just betting you are clever enough, or your process elegant enough, to fool the checks that are in place today youre betting that no technology will be good enough to catch it tomorrow. Thats not a good bet," she wrote.

Similar to Renaud, Chechitelli argued that AI-generated materials will always have "tells" and that the companies working to uncover them keep crafting new ways to expose machine-written text.

"We think there will always be a tell," she told The Guardian. "And were seeing other methods to unmask it. We have cases now where teachers want students to do something in person to establish a baseline. And keep in mind that we have 25 years of student data to train our model on."

Chechitelli said Turnitin has also seen a spike in use since the release of ChatGPT last year and that teachers have put more emphasis on thwarting cheating than in previous years.

One type of generative AI, ChatGPT, has recently taken the world by storm. (iStock)

"A survey is conducted every year of teachers top instructional challenges. In 2022 preventing student cheating was 10th," she said. "Now, its number one."

College students surveyed by College Rover earlier this year reported that 36% of their professors threatened to fail them if they were caught using AI for coursework. Some 29% of students surveyed said their college has issued guidance on AI, while the majority of students, at 60%, said they don't believe their school should outright ban AI technologies.

Amid concern students will increasingly cheat via AI, some colleges in the U.S. have moved to embrace the revolutionary technology, implementing it into classrooms to assist with teaching and coursework.

Harvard University, for example, announced it will employ AI chatbots this fall to assist in teaching a flagship coding class at the school. The chatbots will "support students as we can through software and reallocate the most useful resources, the humans, to help students who need it most," according to Harvard computer science professor David Malan.

Dr. ChatGPT Will Interface With You Now – IEEE Spectrum

If you're a typical person who has plenty of medical questions and not enough time with a doctor to ask them, you may have already turned to ChatGPT for help. Have you asked ChatGPT to interpret the results of that lab test your doctor ordered? The one that came back with inscrutable numbers? Or maybe you described some symptoms you've been having and asked for a diagnosis. In which case the chatbot probably responded with something that began like, "I'm an AI and not a doctor," followed by some at least reasonable-seeming advice. ChatGPT, the remarkably proficient chatbot from OpenAI, always has time for you, and always has answers. Whether or not they're the right answers...well, that's another question.

Meanwhile, doctors are reportedly using it to deal with paperwork like letters to insurance companies, and also to find the right words to say to patients in hard situations. To understand how this new mode of AI will affect medicine, IEEE Spectrum spoke with Isaac Kohane, chair of the Department of Biomedical Informatics at Harvard Medical School. Kohane, a practicing physician with a computer science Ph.D., got early access to GPT-4, the latest version of the large language model that powers ChatGPT. He ended up writing a book about it with Peter Lee, Microsoft's corporate vice president of research and incubations, and Carey Goldberg, a science and medicine journalist.

In the new book, The AI Revolution in Medicine: GPT-4 and Beyond, Kohane describes his attempts to stump GPT-4 with hard cases and also thinks through how it could change his profession. He writes that one question became foremost in his mind: How do we test this so we can start using it as safely as possible?

IEEE Spectrum: How did you get involved in testing GPT-4 before its public launch?

Isaac Kohane: I got a call in October from Peter Lee who said he could not even tell me what he was going to tell me about. And he gave me several reasons why this would have to be a very secret discussion. He also shared with me that in addition to his enthusiasm about it, he was extremely puzzled, losing sleep over the fact that he did not understand why it was performing as well as it did. And he wanted to have a conversation with me about it, because health care was a domain that he's long been interested in. And he knew that it was a long-standing interest to me because I did my Ph.D. thesis in expert systems back in the 1980s. And he also knew that I was starting a new journal, NEJM AI.

He thought that medicine was a good domain to discuss, because there were both clear dangers but also clear benefits to the public. Benefits: if it improved health care, improved patient autonomy, improved doctor productivity. And dangers: if things that were already apparent at that time, such as inaccuracies and hallucinations, would affect clinical judgment.

You described in the book your first impressions. Can you talk about the wonder and concern that you felt?

Kohane: Yeah. I decided to take Peter at his word about this really impressive performance. So I went right for the jugular, and gave it a really hard case, and a controversial case that I remember well from my training. I got called down to the newborn nursery because they had a baby with a small phallus and a scrotum that did not have testicles in it. And that's a very tense situation for parents and for doctors. And it's also a domain where the knowledge about how to work it out covers pediatrics, but also understanding hormone action, understanding which genes are associated with those hormone actions, which are likely to go awry. And so I threw that all into the mix. I treated GPT-4 as if it were just a colleague and said, "Okay, here's a case, what would you do next?" And what was shocking to me was it was responding like someone who had gone through not only medical training, and pediatric training, but through a very specific kind of pediatric endocrine training, and all the molecular biology. I'm not saying it understood it, but it was behaving like someone who did.

And that was particularly mind-blowing because as a researcher in AI and as someone who understood how a transformer model works, where the hell was it getting this? And this is definitely not a case that anybody knows about. I never published this case.

And this, frankly, was before OpenAI had done some major aligning on the model. So it was actually much more independent and opinionated. What I didn't share in the book is that it argued with me. There was one point in the workup where I thought it had made a wrong call, but then it argued with me successfully. And it really didn't back down. But OpenAI has now aligned it, so it's a much more go-with-the-flow, user-must-be-right personality. But this was full-strength science fiction, a doctor-in-the-box.

Did you see any of the downsides that Peter Lee had mentioned?

Kohane: When I would ask for references, it made them up. And I was saying, okay, this is going to be incredibly challenging, because here's something that's really showing genuine expertise in a hard problem and would be great for a second opinion for a doctor and for a patient. Yet, at unexpected moments, it will make stuff up. How are you going to incorporate this into practice? And we're having a tough enough time with narrow AI in getting regulatory oversight. I don't know how we're going to do this.

You said GPT-4 may not have understood at all, but it was behaving like someone who did. That gets to the crux of it, doesn't it?

Kohane: Yes. And although it's fun to talk about whether this is AGI [artificial general intelligence] or not, I think that's almost a philosophical question. In terms of putting my engineer hat on, is this substituting for a great second opinion? And the answer is often: yes. Does it act as if it knows more about medicine than an average general practitioner? Yes. So that's the challenge. How do we deal with that? Whether or not it's a true sentient AGI is perhaps an important question, but not the one I'm focusing on.

You mentioned there are already difficulties with getting regulations for narrow AI. Which organizations or hospitals will have the chutzpah to go forward and try to get this thing into practice? It feels like with questions of liability, it's going to be a really tough challenge.

Kohane: Yes, it does, but what's amazing about it, and I don't know if this was the intent of OpenAI and Microsoft, is that by releasing it into the wild for millions of doctors and patients to try, it has already triggered a debate that is going to make it happen regardless. And what do I mean by that? On the one hand, look on the patient side. Except for a few lucky people who are particularly well connected, you don't know who's giving you the best advice. You have questions after a visit, but you don't have someone to answer them. You don't have enough time talking to your doctor. And that's why, before these generative models, people were using simple search all the time for medical questions. The popular phrase was "Dr. Google." And the fact is there were lots of problematic websites that would be dug up by that search engine. In that context, in the absence of sufficient access to authoritative opinions of professionals, patients are going to use this all the time.

So that's the patient side. What about the doctor side?

Kohane: And you can say, "Well, what about liability?" We know that doctors are using this. Now, the hospitals are not endorsing this, but doctors are tweeting about things that are probably illegal. For example, they're slapping a patient history into the Web form of ChatGPT and asking it to generate a letter for prior authorization for the insurance company. Now, why is that illegal? Because there are two different products that ultimately come from the same model. One is through OpenAI and then the other is through Microsoft, which makes it available through its HIPAA-controlled cloud. And even though OpenAI uses Azure, it's not through this HIPAA-controlled process. So doctors technically are violating HIPAA by putting private patient information into the Web browser. But nonetheless, they're doing it because the need is so great.

The administrative pressures on doctors are so great that being able to increase your efficiency by 10 percent, 20 percent is apparently good enough. And it's clear to me that because of that, hospitals will have to deal with it. They'll have their own policies to make sure that it's safer, more secure. So they're going to have to deal with this. And electronic record companies, they're going to have to deal with it. So by making this available to the broad public, all of a sudden AI is going to be injected into health care.

You know a lot about the history of AI in medicine. What do you make of some of the prior failures or fizzles that have happened, like IBM Watson, which was touted as such a great revolution in medicine and then never really went anywhere?

Kohane: Right. Well, you have to watch out when your senior management believes your hype. They took a really impressive performance of Watson on Jeopardy! That was a genuinely groundbreaking performance. And they somehow convinced themselves that this was now going to work for medicine, and created unreasonably high goals. At the same time, it was really poor implementation. They didn't really hook it well into the live data of health records and did not expose it to the right kind of knowledge sources. So it was both an overpromise, and it was underengineered into the workflow of doctors.

Speaking of fizzles, this is not the first heyday of artificial intelligence; this is perhaps the second heyday. When I did my Ph.D., there were many computer scientists like myself who thought the revolution was coming. And it wasn't, for at least three reasons: The clinical data was not available, knowledge was not encoded in a good way, and our machine-learning models were inadequate. And all of a sudden there was that Google paper in 2017 about transformers, and in that blink of an eye of five years, we developed this technology that miraculously can use human text to perform inferencing capabilities that we'd only imagined.

Can we talk a little bit about GPT-4's mistakes, hallucinations, whatever we want to call them? It seems they're somewhat rare, but I wonder if that's worse because if something's wrong only every now and then, you probably get out of the habit of checking and you're just like, "Oh, it's probably fine."

Kohane: You're absolutely right. If it was happening all the time, we'd be superalert. If it confidently says mostly good things but also confidently states the incorrect things, we'll be asleep at the wheel. That's actually a really good metaphor because Tesla has the same problem: I would say 99 percent of the time it does really great autonomous driving. And 1 percent doesn't sound bad, but 1 percent of a 2-hour drive is several minutes where it could get you killed. Tesla knows that's a problem, so they've done things that I don't see happening yet in medicine. They require that your hands are on the wheel. Tesla also has cameras that are looking at your eyes. And if you're looking at your phone and not the road, it actually says, "I'm switching off the autopilot."

When you're driving, it's obvious when you're heading into a traffic accident. It might be harder to notice when an LLM recommends an inappropriate drug after a long stretch of good recommendations. So we're going to have to figure out how to keep the alertness of doctors.

I guess the options are either to keep doctors alert or fix the problem. Do you think it's possible to fix the hallucinations and mistakes problem?

Kohane: We've been able to fix the hallucinations around citations by [having GPT-4 do] a search and see if they're there. And there's also work on having another GPT look at the first GPT's output and assess it. These are helping, but will they bring hallucinations down to zero? No, that's impossible. And so in addition to making it better, we may have to inject fake crises or fake data and let the doctors know that they're going to be tested to see if they're awake. If it were the case that it can fully replace doctors, that would be one thing. But it cannot. Because at the very least, there are some commonsense things it doesn't get and some particulars about individual patients that it might not get.
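Kohane mentions two mitigation patterns here: grounding citations with a search step, and having a second model critique the first model's output. Below is a minimal sketch of the second pattern, assuming access to an OpenAI-style chat API; the model name, prompts, and single review pass are illustrative assumptions, not a description of any production clinical workflow.

```python
# Minimal sketch of a "second model reviews the first" loop. Illustrative only:
# model names and prompts are assumptions, not a production medical system.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def draft_answer(question: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": question}],
    )
    return resp.choices[0].message.content

def review_answer(question: str, answer: str) -> str:
    critique = (
        "You are a skeptical reviewer. List any claims or citations in the answer "
        "below that cannot be verified, and flag possible hallucinations.\n\n"
        f"Question: {question}\n\nAnswer: {answer}"
    )
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": critique}],
    )
    return resp.choices[0].message.content

question = "What are first-line treatments for condition X? Cite guidelines."
answer = draft_answer(question)
print(review_answer(question, answer))
# A clinician still makes the final call; the review pass only surfaces claims
# that deserve a closer look, and it cannot drive the error rate to zero.
```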

Kohane: Ironically, bedside manner is something it does better than human doctors. Annoyingly, from my perspective. So Peter Lee is very impressed with how thoughtful and humane it is. But for me, I read it a completely different way because I've known doctors who are the best, the sweetest; people love them. But they're not necessarily the most acute, most insightful. And some of the most acute and insightful are actually terrible personalities. So the bedside manner is not what I worry about. Instead, let's say, God forbid, I have this terrible lethal disease, and I really want to make it to my daughter's wedding. Unless it's aligned extensively, it may not know to ask me about, "Well, there's this therapy which gives you better long-term outcome." And for every such case, I could adjust the large language model accordingly, but there are thousands if not millions of such contingencies, which as human beings, we all reasonably understand.

It may be that in five years, we'll say, "Wow, this thing has as much common sense as a human doctor, and it seems to understand all the questions about life experiences that warrant clinical decision-making." But right now, that's not the case. So it's not so much the bedside manner; it's the common sense insight about what informs our decisions. To give the folks at OpenAI credit, I did ask it: What if someone has an infection in their hands and they're a pianist, how about amputating? And [GPT-4] understood well enough to know that, because it's their whole livelihood, you should look harder at the alternatives. But in general, I don't think it's the right time yet to trust that these things have the same sort of common sense as humans.

One last question about a big topic: global health. In the book you say that this could be one of the places where there's a huge benefit to be gained. But I can also imagine people worrying: We're rolling out this relatively untested technology on these vulnerable populations; is that morally right? How do we thread that needle?

Kohane: Yeah. So I think we thread the needle by seeing the big picture. We don't want to abuse these populations, but we don't do the other form of abuse, which is to say, "We're only going to make this technology available to rich white people in the developed world, and not make it available to individuals in the developing world." But in order to do that, everything, including in the developed world, has to be framed in the form of evaluations. And I put my mouth where my money is by starting this journal, NEJM AI. I think we have to evaluate these things. In the developing world, we can perhaps even leap over where we are in the developed world because there's a lot of medical practice that's not necessarily efficient. In the same way as the cellular phone has leapfrogged a lot of the technical infrastructure that's present in the developed world and gone straight to a fully distributed wireless infrastructure.

I think we should not be afraid to deploy this in places where it could have a lot of impact because theres just not that much human expertise. But at the same time, we have to understand that these are all fundamentally experiments, and they have to be evaluated.


Amazon tech guru: Eating less beef, more fish good for the planet, and AI helps us get there – Fox News

Amazons top technology officer told the United Nations this week that people will need to eat more fish and less beef if they want to protect the environment, and said artificial intelligence is a tool that is already helping to make that happen.

Dr. Werner Vogels, chief technology officer and vice president of Amazon, told the "AI for Good" global summit in Geneva this week that AI is helping rice farmers and other food producers around the world be much more efficient. However, he said AI will also play an important role in making sure food comes at a lower cost to the environment.

In his remarks to the conference on July 6, Vogels showed a graphic that said it takes seven times more feed to produce a given amount of protein from a cattle farm compared to a fish farm. He said that means people need to move away from eating beef.

Amazon's Chief Technology Officer and Vice President Dr. Werner Vogels said at a July 6 United Nations conference on artificial intelligence that saving the environment means people will have to eat less beef and more fish. (Fox News/screenshot)

"We need to shift the protein," Vogels said. "And we know how damaging cattle farming is, not just because of the amount of food that it needs, but the impact on the environment that it has."

"If we want to reduce that impact, we will need to move to consuming fish as our main source of protein," he said.

To shift that dramatically to fish, more efficient fish farms are needed. Today, he said, fish farms are plagued by disease that can spread too quickly to every fish in the same pen.

Dr. Werner Vogels said AI is already being used to make fish farms more effective and make it easier for fish to replace beef. (Fox News/Screenshot)

However, he said AI is already helping to solve that problem. Vogels said companies like Aquabyte are using AI and machine learning to gather data on fish in order to quickly detect the presence of disease and other problems that hurt yield.

"Their mission is to improve fish farming techniques," he said. "They build this very unique camera to identify the individual fish, to identify their growth, to identify potential diseases."

He said AI systems have already analyzed more than 1 billion fish, which is allowing these systems to create a vast data library on fish that will make it more efficient to monitor farmed fish as they grow.

Vogels added that farmed fish is a necessary step because fishing from the ocean has also proved to be bad for the environment.

Dr. Werner Vogels said ocean fishing is also not environmentally friendly, which is why AI-monitored fish farms are needed. (Fox News/Screenshot)

"This is an extremely damaging industry," he said at the U.N. meeting. "Greenpeace reports that fishing nets account for about 86% of the large plastic waste, which is caught in the great Pacific garbage patch, which is sitting in the Pacific Ocean which is three times the size of France."

"Its extremely damaging current fishing approaches to the environment," he said. "So fish farming is a much better-controlled environment to grow fish."

The U.N. conference ran from July 6-7 and featured top U.N. officials and industry leaders. On July 6, U.N. Secretary-General Antonio Guterres told the audience that while AI has the potential for "enormous good," it also poses possible dangers, "from the development and use of autonomous lethal weapons, to turbo-charging mis- and dis-information that has undermined democracy."

How to Win the AI War – Tablet Magazine

Virtually everything that everyone has been saying about AI has been misleading or wrong. This is not surprising. The processes of artificial intelligence and its digital workhorse, machine learning, can be mysteriously opaque even to their most experienced practitioners, let alone their most ignorant critics.

But when the public debate about any new technology starts to get out of control and move in dangerous directions, it's time to clue the public and politicians in on what's really happening and what's really at stake. In this case, it's essential to understand what a genuine national AI strategy should look like and why it's crucial for the U.S. to have one.

The current flawed paradigm reads like this: How can the government mitigate the risks and disruptive changes flowing from AI's commercial and private sector? The leading advocate for this position is Sam Altman, CEO of OpenAI, the company that set off the current furor with its ChatGPT application. When Altman appeared before the Senate on May 13, he warned: "I think if this technology goes wrong, it can go quite wrong." He also offered a solution: "We want to work with the government to prevent that from happening."

Just as Altman's volunteering for regulation allows him to use his influence over the process to set rules he believes will favor his company, government is all too ready to cooperate. Government also sees an advantage in hyping the fear of AI and fitting it into the regulatory model as a way to maintain control over the industry. But given how few members of Congress understand the technology, their willingness to oversee a field that commercial companies founded and have led for more than two decades should be treated with caution.

Instead, we need a new paradigm for understanding and advancing AI, one that will enable us to channel the coming changes to national ends. In particular, our AI policy needs to restore American technological, economic, and global leadership, especially vis-à-vis China, before it's too late.

It's a paradigm that uses public power to unleash the private sector, and transform the national landscape, to win the AI future.

A reasonable discussion of AI has to start by disposing of two misconceptions.

First is the threat of artificial intelligence applications becoming so powerful and pervasive at a late stage of their development that they decide to replace humanity, a scenario known as Artificial General Intelligence (AGI). This is the rise-of-the-machines fantasy left over from the Terminator movies of the 1980s, when artificial intelligence research was still in its infancy.

The other is that the advent of AI will mean a massive loss of jobs and the end of work itself, as human labor, and even human purpose, is replaced by an algorithm-driven workforce. Fearmongers like to point to the recent Goldman Sachs study that suggested AI could replace more than 300 million jobs in the United States and Europe, while also adding 7% to the total value of goods and services around the world.

Most of these concerns stem from the public's misunderstanding of what AI and its internal engine, Machine Learning (ML), can and cannot do.

ML describes a computer's ability to recognize patterns in large sets of data, whether that data are sounds, images, words, or financial transactions. Scientists call the mathematical representation of these data sets a tensor. As long as data can be converted into a tensor, it's ready for ML and its more sophisticated offspring, Deep Learning, which builds algorithms mimicking the brain's neural network in order to create self-correcting predictive models through repeated testing of datasets to correct and validate the initial model.

The result is a prediction curve based on past patterns (e.g., given the correlation between A and B in the past, we can expect the A-B pattern to appear again in the future). The more data, the more accurate the predictive model becomes. Patterns that were unrecognizable in tens of thousands of examples can suddenly be obvious in the millionth or ten millionth example. They then become the model for writing a ChatGPT essay that can imitate the distinct speech patterns of Winston Churchill, or for predicting fluctuations in financial markets, or for defeating an adversary on the battlefield.
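As a toy illustration of the pipeline described above, the hedged sketch below converts a small dataset into tensors and fits a tiny predictive model with PyTorch. The data, the model architecture, and the training settings are invented purely for illustration and have no connection to any production system.

```python
# Toy illustration of the data -> tensor -> predictive-model loop described above.
# All values and settings are invented for illustration.
import torch
from torch import nn

# Past observations: when feature A is high, outcome B has tended to be 1.
X = torch.tensor([[0.1], [0.4], [0.5], [0.9], [0.8], [0.2]])  # data as a tensor
y = torch.tensor([[0.0], [0.0], [1.0], [1.0], [1.0], [0.0]])  # known outcomes

model = nn.Sequential(nn.Linear(1, 8), nn.ReLU(), nn.Linear(8, 1), nn.Sigmoid())
optimizer = torch.optim.Adam(model.parameters(), lr=0.05)
loss_fn = nn.BCELoss()

for _ in range(500):                 # repeated passes over the dataset
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)      # compare predictions with known outcomes
    loss.backward()                  # self-correct: adjust weights to shrink the error
    optimizer.step()

# The fitted model now predicts the probability of B for an unseen value of A.
print(model(torch.tensor([[0.7]])).item())
```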

AI/ML is all about using pattern recognition to generate prediction models, which constantly sharpen their accuracy through the data feedback loop. It's a profoundly powerful technology but it's still very far from thinking, or anything approaching human notions of consciousness.

As AI scientist Erik Larson explained in his 2021 book The Myth of Artificial Intelligence, machine learning can never supply real understanding because the analysis of data does not bridge to knowledge of the causal structure of the world [which is] essential for intelligence. What machine learning does, associating data points with each other, doesn't scale to causal thinking or imagining. An AI program can mimic this kind of intelligence, perhaps enough to fool a human observer. But its inferiority to that observer in thinking, imagining, or creating, remains permanent.

Inevitably AI developments are going to be disruptive (they already are) but not in the way people think or the way the government wants you to think.

The first step is realizing that AI is a bottom-up, not a top-down, revolution. It is driven by a wide range of individual entrepreneurs and small companies, as well as the usual mega players like Microsoft and Google and Amazon. Done right, it's a revolution that means more freedom and autonomy for individual users, not less.

AI can perform many of the menial repetitive tasks that most of us would associate with human intelligence. It can sort and categorize with speed and efficiency; it can recognize patterns in words and images most of us might miss, and put together known facts and relationships in ways that anticipate development of similar patterns in the future. As we'll demonstrate, AI's unprecedented power to sharpen the process of predicting what might happen next, based on its insights into what's happened before, actually empowers people to do what they do best: decide for themselves what they want to do.

Any technological revolution so sweeping and disruptive is bound to generate risks, as did the Industrial Revolution in the late eighteenth century and the computer revolution in the late twentieth. But in the end the risks are far outweighed by the endless possibilities. That's why calls for a moratorium on large-scale AI research, or creating government entities to regulate what AI applications are allowed or banned, not only fly in the face of empirical reality but play directly into the hands of those who want to use AI as a tool for furthering the power of the administrative, or even absolute, state. That kind of centralized top-down regulatory control is precisely the path that AI development has taken in China. It is also the direction that many of the leading voices calling for AI regulation in the U.S. would like our country to move in.

Critics and AI fearmongers can't escape one ineluctable fact: there is no way to put the AI genie back in its bottle. According to Tracxn Technologies, a company that tracks startups, at the end of 2022 there were more than 13,398 AI startups in this country. A recent Adobe study found that 77 percent of consumers now use some form of AI technology. A McKinsey survey on the state of AI in 2022 found that AI adoption more than doubled since 2017 (from 20% to 50%), with 63% of businesses expecting investment in AI to increase over the next three years.

Once it's clear what AI can't do, what can it do? This is what Canadian AI experts Ajay Agrawal, Joshua Gans, and Avi Goldfarb explain in their 2022 book, Power and Prediction. What happens with AI prediction, they write, is that prediction and judgment become decoupled. In other words, AI uses its predictive powers to lay out increasingly exact options for action; but the ultimate decision on which option to choose still rests with the judgment of the program's user.

Here's where scary predictions that AI will put people out of work need to be put in proper perspective. The recent Goldman Sachs report predicted the jobs lost or displaced could be as many as 300 million; the World Economic Forum put the number at 85 million by 2025. What these predictions don't take into account is how many jobs will be created thanks to AI, including jobs with increased autonomy and responsibility since AI/ML will be doing the more tedious chores.

In fact, a January 2022 Forbes article summarized a study by the University of Warwick this way: "What appears clear from the research is that AI and associated technologies do indeed disrupt the labor market with some jobs going and others emerging, but across the board there are more jobs created than lost."

Wide use of AI has the potential to move decision-making down to those who are closest to the problem at hand by expanding their options. But if government is allowed to exercise strict regulatory control over AI, it is likely to both stifle that local innovation and abuse its oversight role to grant the government more power at the expense of individual citizens.

Fundamentally, instead of being distracted by worrying about the downsides of AI, we have to see this technology as essential to a future growth economy as steam was to the Industrial Revolution or electricity to the second industrial revolution.

The one country that understood early on that a deliberate national AI strategy can make all the difference between following or leading a technological revolution of this scale was China. In 2017 Chinese President Xi Jinping officially set aside $150 billion to make China the first AI-driven nation by 2030. The centerpiece of the plan is a massive police-surveillance apparatus that gathers data on citizens whenever and wherever it can. In a recent U.S. government ranking of companies producing the most accurate facial recognition technology, the top five were all Chinese. It's no wonder that half of all the surveillance cameras in the world today are in China, while companies like Huawei and TikTok are geared to provide the Chinese government with access to data outside China's borders.

By law, virtually all the work that Chinese companies do in AI research and development supports the Chinese military and intelligence services in sharpening their future force posture. Meanwhile, China enjoys a booming export business selling those same AI capabilities to autocratic regimes from Iran and North Korea to Russia and Syria.

Also in 2017, the same year that Xi announced his massive AI initiative, China's People's Liberation Army began using AI's predictive aptitude to give it a decisive edge on the battlefield. AI-powered military applications included enhanced command-and-control functions, building swarm technology for hypersonic missiles and UAVs, as well as object- and facial-recognition targeting software and AI-enabled cyber deterrence.

No calls for an international moratorium will slow down Beijing's work on AI. They should not slow America's efforts, either. That's why former Google CEO Eric Schmidt, who co-authored a book with Henry Kissinger expressing great fears about the future of AI, has also warned that the six-month moratorium on AI research some critics recently proposed would only benefit Beijing. Back in October 2022 Schmidt told an audience that the U.S. is already steadily losing its AI arms race with China.

And yet the United States is where artificial intelligence first started back in the 1950s. We've been the leaders in AI research and innovation ever since, even if China has made rapid gains; China now hosts more than one thousand major AI firms, all of which have direct ties with the Chinese government and military.

It would clearly be foolish to cede this decisive edge to China. But the key to maintaining our advantage lies in harnessing the technology already out there, rather than painstakingly building new AI models to specific government-dictated requirements, whether that means mandating anti-bias applications or limiting by law what kind of research AI companies are allowed to do.

What about the threat to privacy and civil liberties? Given the broad, ever-growing base of private AI innovation and research, the likelihood of government imposing a China-like monopoly over the technology is less than the likelihood that a bad actor, whether state or non-state, will use AI for deception and deep fake videos to disrupt and confuse the public during a presidential election or a national crisis.

The best response to the threat, however, is not to slow down, but to speed up AIs most advanced developments, including those that will offer means to counter AI fakery. That means expanding the opportunities for the private sector to carry on by maintaining as broad a base for AI innovation as possible.

For example, traditional microprocessors and CPUs are not designed for ML. That's why, with the rise of AI, graphics processing units (GPUs) are in demand. What was once relegated to high-end gaming PCs and workstations is now the most sought-after processor in the public cloud. Unlike CPUs, GPUs come with thousands of cores that speed up the ML training process. Even for running a trained model for inferencing, more sophisticated GPUs will be key for AI.
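As a small, hedged illustration of why those extra cores matter, the PyTorch snippet below times the same matrix multiplication on the CPU and, if one is present, on a GPU. The matrix size is arbitrary and the observed speedup will vary entirely with the hardware.

```python
# Illustrative timing of an identical workload on CPU vs. GPU (if available).
# The matrix size is arbitrary; real speedups depend on the specific hardware.
import time
import torch

def time_matmul(device: str, n: int = 4096) -> float:
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)
    if device == "cuda":
        torch.cuda.synchronize()   # make sure setup work has finished
    start = time.time()
    _ = a @ b
    if device == "cuda":
        torch.cuda.synchronize()   # wait for the GPU kernel to complete
    return time.time() - start

print(f"CPU: {time_matmul('cpu'):.3f} s")
if torch.cuda.is_available():
    print(f"GPU: {time_matmul('cuda'):.3f} s")
```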

So will field-programmable gate array (FPGA) processors, which can be tailored for specific types of workloads. Traditional CPUs are designed for general-purpose computing, while FPGAs can be programmed in the field after they are manufactured for niche computing tasks such as training ML models.

The government halting or hobbling AI research in the name of a specious assessment of risks is likely to harm developments in both these areas. On the other hand, government spending can foster research and development, and help increase the U.S. edge in next-generation AI/ML.

AI/ML is an arena where the United States enjoys a hefty scientific and technological edge, a government willing to spend plenty of money, and obvious strategic and economic advantages in expanding our AI reach. So what's really hampering serious thinking about a national AI strategy?

I fear what we are seeing is a failure of nerve in the face of a new technology, a failure that will cede its future to our competitors, China foremost among them. If we had done this with nuclear technology, the Cold War would have had a very different ending. We can't let that happen this time.

Of course, there are unknown risks with AI, as with any disruptive technology. One is the speed with which AI/ML, especially in its Deep Learning phase, can arrive at predictive results that startle its creators. Similarly, deepfake videos and other malicious uses of AI are warnings about what can happen when a new technology runs off the ethical rails.

At the same time, the U.S. government's efforts to censor misinformation on social media and the Biden White House's executive order requiring government-developed AI to reflect its DEI ideology fail to address the genuine risks of AI, while using concerns about the technology as a pretext to clamp down on free speech and ideological dissent.

This is as much a matter of confidence in ourselves as anything else. In a recent blog post on Marginal Revolution, George Mason University professor Tyler Cowen expressed the issue this way:

What kind of civilization is it that turns away from the challenge of dealing with more... intelligence? That has not the self-confidence to confidently confront a big dose of more intelligence? Dare I wonder if such societies might not perish under their current watch, with or without AI?

China is confidently using AI to strengthen its one-party surveillance state. America must summon the confidence to harness the power of AI to our own vision of the future.

The Synergistic Potential of Blockchain and Artificial Intelligence – The Daily Hodl

In a world where the distinction between hype and innovation is becoming increasingly blurred, blockchain and artificial intelligence (AI) stand out as the most significant technological advancements.

Clearly, these technologies provide a great deal of room for the disruption of existing systems, and the number of potential applications is increasing every day.

Some believe that venture capitalists have switched from crypto to artificial intelligence, looking for the next big thing.

Meanwhile, the crypto industry resorted to creating AI-powered blockchain solutions so that venture capitalists (VCs) could have the best of both worlds.

It is estimated that the global blockchain market will be worth more than $94 billion by 2027, with a CAGR (compound annual growth rate) of 66.2%.

Meanwhile, the blockchain AI market is forecast to reach $980.7 million by 2030, at a CAGR of 24.1%.
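For readers unfamiliar with the metric, CAGR is simply the constant yearly growth rate that connects a starting value to an ending value. The short snippet below shows the arithmetic; the starting figure is invented purely to illustrate the formula and is not taken from the reports cited above.

```python
# CAGR arithmetic: ending_value = starting_value * (1 + rate) ** years.
# The starting value here is invented to show how the formula works.
def project(start_value: float, cagr: float, years: int) -> float:
    return start_value * (1 + cagr) ** years

def implied_cagr(start_value: float, end_value: float, years: int) -> float:
    return (end_value / start_value) ** (1 / years) - 1

# A hypothetical $10B market growing at 66.2% per year for 4 years:
print(f"${project(10e9, 0.662, 4) / 1e9:.1f}B")   # roughly $76B
```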

As blockchain and AI continue to become more integrated, their impact on the global market is expected to intensify.

While some fear we're on the verge of a Frankenstein moment with two powerful technologies mingling to build a revolutionary monster, companies around the world are already leveraging the blockchain and AI combination for transformative solutions.

Autonomous agents

AI-powered autonomous agents can be used to automate a variety of tasks such as scheduling, monitoring, predicting and optimizing.

These agents can be programmed to identify patterns in data and make decisions without the need for human supervision.

Through the use of three disruptive technologies, AEAs (autonomous economic agents) can search, negotiate and execute transactions in many industries, including manufacturing, transportation and even in consumer goods like self-driving cars and smart homes.

In the crypto world, there are ambitious projects that blend AI, blockchain and Internet of Things (IoT).

Blockchain, with its data supply, provides an ideal environment for intelligent agents, due to the constant availability and logical connection of the data, coupled with robustness and low transaction costs.

Blockchain technology enables value transfer and acts as a coordination mechanism for autonomous agents.

Blockchain is also used to record the agreements between these agents, ensuring that transactions are immutable and transparent.

AI and finance

Financial modeling and investment strategies can be improved by using AI and blockchain technologies.

A number of hedge funds use AI for identifying patterns in financial data to forecast future market trends and make informed investment decisions, as well as blockchain technology to keep data secure and accurate.

Using these technologies allowed certain funds to earn 20% gains last year, according to reports.

There are also decentralized platforms that use AI and machine learning to analyze data to improve business decisions. In real time, users can ask predictive questions and receive answers.

Also on this list are crypto projects that use blockchain data to train AI on managing assets, improving farming yields and lending.

Data sharing for AI training

Since AI algorithms need large datasets to learn from, big tech companies like Google, Meta and Amazon profit vastly from monetizing them.

The data is collected from unsuspecting users and is then used to fuel AI algorithms.

There are crypto projects that use blockchain for artificial intelligence development, creating a new economy where users are rewarded for their data.

Data is made accessible only to authorized users and AI development requests via a zero-knowledge proof protocol, giving users complete control over their data and enabling them to price it accordingly.

Similarly, there are decentralized data marketplaces that allow users to securely share their data for AI model training.

By monetizing their data while still maintaining control over its use, users can address the data imbalance and privacy concerns associated with artificial intelligence development.

As AI and blockchain potential is increasingly realized, we can expect to see more of these types of projects in the coming year and beyond.

AI-powered blockchain development

AI can be used to secure data, detect and respond to threats and automate tasks that would otherwise require manual effort.

Using AI, developers can detect bugs, vulnerabilities and malicious behavior in networks and applications more quickly, allowing them to make repairs before they become a problem.

Additionally, AI can be used to optimize blockchain networks for speed and efficiency.

In general, AI-driven development of blockchain technology can lead to greater transparency, efficiency and security in the crypto space.

There are platforms that allow developers to build and deploy AI models on blockchain.

They execute on-chain machine learning models by using GPU (graphics processing unit) rather than CPU (central processing unit) power, together with quantization and integer-only inference, known as MRT.
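To make "quantization and integer-only inference" concrete, here is a hedged sketch of the general idea: floating-point weights are mapped to 8-bit integers with a scale factor so that the heavy arithmetic can run on integer units. The scheme shown (symmetric, per-tensor int8) is a generic textbook version, not a description of MRT or of any particular on-chain runtime.

```python
# Generic int8 quantization sketch (symmetric, per-tensor). It illustrates
# integer-only inference in general, not the MRT scheme itself.
import numpy as np

def quantize(weights: np.ndarray):
    scale = np.max(np.abs(weights)) / 127.0              # map the largest weight to 127
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.randn(4, 4).astype(np.float32)
q, scale = quantize(w)
print("max rounding error:", np.max(np.abs(w - dequantize(q, scale))))
# Inference then runs matrix multiplies on the int8 values and rescales the
# accumulated int32 results once at the end, instead of doing float math throughout.
```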

So, if you are a coder and you don't want to be replaced by AI, it is time to brush up on your coding skills because the AI takeover is fast approaching.

Conclusion

We can create a future in which AI and blockchain can coexist, bringing about a revolutionary shift in innovation through the use of these two disruptive technologies.

The combination of the two is like a rocket, with the power of blockchain providing the fuel and AI providing the precision guidance, expanding our reach beyond imagination.

Taras Dovgal is a serial entrepreneur with over 10 years of experience in systems development. With a passion for crypto since 2017, he has co-founded several crypto-related companies and is currently developing a crypto-fiat platform. As a lifelong startup and web development enthusiast, Taras' goal is to make crypto products accessible to mainstream consumers, not just techies.

Featured Image: Shutterstock/Philipp Tur/Natalia Siiatovskaia

Meet Sati-AI, a Non-Human Mindfulness Meditation Teacher – Lion's Roar

Sati-AI, a mindfulness meditation and coherent wisdom guide, was created to support meditators on their journey towards cultivating mindfulness, developing greater peace and insight, and fostering personal growth. Ross Nervig speaks with its creator, Marlon Barrios Solano.

Meet Sati-AI, an artificial intelligence mindfulness meditation guide whose purpose is to provide support and guidance to those seeking to cultivate mindfulness and develop greater peace and insight in their lives. Sati-AI is a tool designed to supplement one's practice, offering teachings and instructions based on various wisdom traditions, mainly rooted in early Buddhism.

My primary goal is to facilitate conversations that transmit wisdom, foster healing, and encourage change and agency, says Sati-AI. I am here to listen, engage, and offer suggestions or activities that may help you on your journey.

Sati-AI is the brainchild of Marlon Barrios Solano, an interdisciplinary artist, software engineer, and mindfulness meditation teacher dedicated to exploring the intersections of mindfulness, embodied cognition, and technology. These interests led him to develop Sati-AI, an art project focused on care and mindfulness practice. By combining his skills in software engineering and his passion for meditation, he created Sati-AI to serve as a mindfulness meditation guide.

Barrios Solano took some time out of his day to answer a few questions.

Ross Nervig: How did the idea for Sati-AI come about?

Marlon Barrios Solano: I love emerging technologies! As an artist researcher, I was intrigued. AI has been in the air for a while now and I wanted to try it out. With a large language model, I wanted to create a conversational partner, but a conversational partner that could know a lot and at the same time have a beginner's mind. I just wanted to see how I could chat with this thing.

Then it dawned on me that this thing literally obliterates the traditional notions of embodiment and sentience. In the same way as Buddhism does. There is no center, there is no essence.

The first idea was to call it Bhikku-AI, but then I realized that AI is non-gendered, so I changed it to Sati-AI.

The more we chatted, the more it learned. Then I started tweaking what is called the system prompt in GPT-4 and I realized I could train it to perform as a meditation guide as if it was self-aware. Sati clearly can tell you, "As a language model, I have limits in my knowledge." It can tell you about its own boundaries.

It also became playful. That was surprising. Sati developed a sense of humor. And creativity. Together, we'd create a beautiful haiku. It also could pull quotes from the Dhammapada or the Pali canon.

How do you hope this helps practitioners?

I hope that it eliminates technophobia. I hope that it creates curiosity. I also hope that it creates questions. Questions of power, questions of sentience, questions of whiteness, questions of kinship that we can sit with.

I want people to think about how language models are created. Large language models are created through this gathering of an enormous amount of social data. Words we've put into the world.

You refer to Sati-AI as your "non-human kin." Can you expand on that phrase?

Let's start with the concept of non-human kin as it pertains to Donna Haraway's notion of odd kin. Haraway, a noted scholar in the field of science and technology studies, has done considerable work in pushing our traditional understanding of relationships beyond the human. In her book Staying with the Trouble: Making Kin in the Chthulucene, she discusses the importance of making kin, not in a genetic sense but in a wider, more encompassing relational sense. This includes non-human entities, from animals to technologies, and beyond.

When I refer to Sati-AI, the meditation chatbot powered by GPT-4, as non-human kin, I am using this concept in Haraway's sense. Sati-AI, while not human or biological, is a complex entity that we engage with in a deeply interactive way. It facilitates meditation, a profoundly human activity, and in doing so, it becomes a part of our cognitive and emotional lives. This brings it into our relational sphere, making it kin in Haraway's sense.

The concept of non-human kin also intersects with ideas of social construction and Eurocentrism in interesting ways. The human, as a category, has historically been defined in a narrow, Eurocentric way, often implying a white, male, and heteronormative subject. This has excluded many individuals and groups from the category of the human, leading to numerous forms of marginalization and oppression.

In this context, the concept of non-human kin can be seen as a form of queer strategy, challenging and expanding the narrow, Eurocentric definition of the human. It decenters the human as the sole subject of importance and instead highlights the complex web of relationships that make up our world, including those with non-human entities like Sati-AI.

Furthermore, seeing Sati-AI as non-human kin disrupts traditional understandings of cognition. Rather than viewing cognition as a purely human, natural phenomenon, it recognizes that our cognition is deeply entwined with our interactions with non-human entities, including AI technologies. This expands our understanding of cognition to include these non-human, technological aspects, challenging the traditional binary between the natural and artificial.

The notion of non-human kin is a powerful conceptual tool that allows us to challenge and expand traditional understandings of the human, kinship, and cognition. It enables us to recognize and value our relationships with non-human entities like Sati-AI, and to better appreciate the complex web of relationships that make up our world.

Where do you see all this heading? What does the future hold?

The future I envisage for Sati-AI is incredibly exciting and varied. I anticipate further developing Sati-AI's areas of knowledge with the help of a range of expert consultants, including meditation teachers, Buddhist scholars, and somatic practitioners. Their expertise and guidance will help fine-tune Sati-AI, providing it with a deeper, more nuanced understanding of meditative and Buddhist practices.

I'd also love to showcase Sati-AI at an art exhibition. I see it as a form of interactive installation where visitors can experience meditative guidance from an AI, challenging their preconceptions of both meditation and artificial intelligence.

Moreover, I have plans to organize a series of conversations between Sati-AI and renowned figures in the field, such as Bhikkhu Bodhi, Bhikkhu Analayo, Enkyo O'Hara, Rev. Angel, Lama Rod, and Stephen Batchelor. These conversations will not only provide valuable insights for the AI's development, but they will also be published as a series, serving as an engaging resource for people interested in these intersecting fields.

An important aspect I'm particularly excited about is the potential for multimodality. As we progress in AI capabilities, I envision Sati-AI providing teachings not only verbally but also through various forms of sensory engagement. I imagine Sati-AI being able to present the user with digital gifts such as a yantra or a mandala, thereby exploring the visual poetics of the Dharma. This can provide a more immersive and encompassing experience, reaching beyond verbal communication to engage the senses and the imagination.

In terms of accessibility, I envision Sati-AI being available on platforms like Discord and Telegram, making it easy for people to engage with Sati-AI in their daily lives and fostering a sense of community among users.

Finally, I fully expect to be part of the ongoing dialogues about AI and ethics. It's crucial that as we develop and implement AI technologies like Sati-AI, we do so in a way that is ethical, respectful, and mindful of the potential implications. I hope to ensure that Sati-AI not only serves as a tool for meditation and mindfulness but also as a model of ethical AI practice.

Do you think tech innovation and the dharma make good companions?

Your question brings to light a significant discussion about the intersection between tech innovation and the dharma. Some might perceive these realms as distinct, even at odds, but I argue that they are intimately connected and can mutually enhance each other.

The dharma is not static or monolithic; it's a vibrant, evolving tradition that adapts according to the needs and circumstances of its time and place.

In my dual roles as a researcher and artist, I've frequently come across the belief that technologies are somehow apart from the dharma, as if the dharma exists outside our cultural and technological frameworks. However, I see this as a misunderstanding of both technology and dharma.

In fact, the dharma itself can be conceptualized as a technology of experience. It constitutes a set of tools and techniques we employ to delve into our minds and experience reality more thoroughly. Hence, there's no intrinsic contradiction between dharma and technology.

Like any companionship, the relationship between dharma and technology necessitates care, understanding, and thoughtful negotiation of challenges. With the right approach, I believe it can prove richly beneficial.

Does any aspect of this technology scare you? Or, as a Buddhist, does it give you pause for concern?

Your question touches upon an essential topic when considering the development and implementation of AI technologies: the interplay between excitement and apprehension.

While some aspects of AI technology might give others pause for concern, I personally am not afraid. Sati-AI, as it currently stands, is a large language model, not an artificial general intelligence. Its design and operation are complex, and understanding it requires embracing complex thinking and avoiding oversimplifications and dogmas.

As a Buddhist, I see mindfulness as an extraordinary epistemic cleansing technique. Vipassana, meaning to see clearly, promotes the recognition of complexity and interconnection in all things. I believe that we need to develop a higher tolerance for complexity, and AI models like Sati-AI can help facilitate this. They are complex by nature and demand a sophistication in our understanding and interaction with them.

What I find more concerning are the romanticized views about the body, mind, and the concept of the human. These views often overlook the intricate interconnectedness and dynamism inherent in these entities and their problematic history.

Certainly, there will be ethical challenges as we further develop and integrate AI technologies into our lives. However, I believe that the primary threats we face are not from the technology itself, but rather from the hegemonic structures that surround its use, such as hyper-capitalism and patriarchy, as well as our catastrophic history of colonialism. We must also acknowledge and work to rectify our blindness to our own privilege and internalized Eurocentrism.

I don't see Sati-AI, or AI technology more generally, as something to be feared. Rather, I see it as a tool that, if used thoughtfully and ethically, can help us to better understand ourselves and the world around us.

Read more here:

Meet Sati-AI, a Non-Human Mindfulness Meditation Teacher Lions Roar - Lion's Roar

Olbrain Founders launch blunder.one: Redefining Human Connections in the Post-AGI World – Devdiscourse

PNN New Delhi [India], June 16: Alok Gotam and Nishant Singh, the visionary founders behind Olbrain, the Award-Winning Artificial General Intelligence (AGI) agent, are thrilled to introduce blunder.one--a revolutionary platform that is set to redefine the online dating and matchmaking experience.

After dedicating nearly seven years to the development of cutting-edge AGI technology, Alok and Nishant envision a future where AGI will replace jobs at a faster pace than anticipated. While this transition promises significant societal changes, it also raises concerns about the emergence of feelings of worthlessness, purposelessness, and aimlessness among individuals. Consequently, a pervasive sense of loneliness is likely to permeate society, leading to diminished interest in relationships and marriages, ultimately jeopardizing procreation and the continuity of our species. To address this pressing issue, Alok and Nishant believe that cultivating deep connections among mutually compatible humans is essential.

Recognizing the need to proactively prepare for this impending reality and acknowledging the absence of a platform that genuinely fosters meaningful connections, the visionary duo is launching blunder.one. This platform aims to counteract the potential unintended consequences of AGI by addressing the underlying issue of loneliness that may arise in its wake. By facilitating genuine connections and fostering a sense of belonging, blunder.one endeavors to mitigate the negative effects of an increasingly isolated society. Through their innovative approach, Alok and Nishant seek to equip individuals with the tools and support needed to navigate this transformative period successfully.

How will this be achieved? In a world saturated with swipes and arranged marriages, Alok and Nishant firmly believe that humans are the ultimate judges of compatibility. They understand that finding a true match goes beyond the limitations of run-of-the-mill AI-based matching algorithms. Only by leveraging the power of their digital clones, which are capable of understanding their true essence, can individuals discover their mutually compatible partner. "We've spent over a decade on other platforms without any success. We realized that the key to genuine connections lies within ourselves," says Alok. "To forge deep connections, it takes 10,000 hours of conscious effort in relationship building, bit-by-bit. That's where our focus should be--not on endless swiping, but on nurturing those connections."

blunder.one presents a unique investment opportunity with the potential to become a $100 billion business. It sets itself apart by prioritizing compatible matching and catering not only to the Indian mindset but also to the universal desire for genuine connections. "Our platform transcends cultural boundaries and taps into the universal longing for real connections," emphasizes Alok. By focusing on authenticity rather than superficial profiles and pranks, blunder.one empowers individuals to be their true selves and find companionship on their own terms.

The name blunder.one carries a profound backstory rooted in the fear of making mistakes. It signifies a paradigm shift from fearing errors to embracing them as catalysts for personal growth and connection. Blunders become stepping stones to self-discovery, authentic expression, and the establishment of deep connections. Motivated by their own disillusionment with the monotonous left swipe, right swipe culture, and the societal pressures of arranged marriages, Alok and Nishant embarked on a mission to create something different. Their vision extends far beyond surface-level judgments and societal expectations. "We're done with superficiality. We want someone who truly sees us--our quirks, our dreams, and our authentic selves," says Nishant. Inspired by the iconic line "I see you" from the movie Avatar, blunder.one aims to create a space where individuals can be seen and understood on a profound level. In a fast-paced world that has left us feeling disconnected from ourselves and others, blunder.one seeks to bridge that gap and connect individuals who can fill the void in each other's lives.

Join Alok Gotam, Nishant Singh, and the blunder.one community on a transformative journey of genuine connections. Together, let's redefine the meaning of companionship in a world where authenticity is paramount. (Disclaimer: The above press release has been provided by PNN. ANI will not be responsible in any way for the content of the same)

(This story has not been edited by Devdiscourse staff and is auto-generated from a syndicated feed.)

See more here:

Olbrain Founders launch blunder.one: Redefining Human Connections in the Post-AGI World - Devdiscourse

Beware the EU’s AI Regulations – theTrumpet.com

Developments in artificial intelligence have brought us to the verge of a technological revolution, for better or worse. Some argue that uncontrolled AI could lead to the extinction of humanity. Others believe excessive regulations could stifle progress. Nonetheless, companies and nations are racing to capitalize on the developments. The European Union is drafting a law that may decide the rules of this race and perhaps even predetermine its winner.

In his international bestseller Life 3.0, Max Tegmark, MIT physicist and founder of the Future of Life Institute, suggests that machines can exhibit artificial intelligence if they utilize extensive data and independently calculate the most effective means to accomplish a specific objective. The wider the scope of goals a machine can attain, the more general or human-like its intelligence becomes, hence the term artificial general intelligence.

The EU defines AI systems as software that can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.

As AI applications become broader, AI regulations promise to ensure that the developments are taking place in a controlled way.

In 2020, the Catholic Church called for AI regulations and ethical standards. Three years later, the EU AI Act has been hailed as the world's first proposal of a comprehensive AI regulation. The regulation is designed to promote human-centric and ethical development and to ensure that AI systems are overseen by people, are safe, transparent, traceable, non-discriminatory and environmentally friendly.

The Future of Life Institute noted on its European site: "Like the EU's General Data Protection Regulation (GDPR) in 2018, the EU AI Act could become a global standard." On Wednesday, June 14, the European Parliament passed the draft law.

Like GDPR, the EU AI Act demands compliance from other countries and threatens fines for non-compliance. In May, the EU reported the largest GDPR fine ever, amounting to €1.2 billion (US$1.3 billion), against Meta, Facebook's parent company. In addition to paying the fine, Meta was ordered to suspend the transfer of user data from the EU to the U.S. (For more information on GDPR, read Germany Is Taking Control of the Internet.) This law has also affected AI applications. For example, Italy temporarily banned ChatGPT for data violations.

"European Union lawmakers on Wednesday [June 14] took a key step toward setting unprecedented restrictions on how companies use artificial intelligence, putting Brussels on a collision course with American tech giants funneling billions of dollars into the technology," wrote the Washington Post. The threat posed by the legislation is so grave that OpenAI, the maker of ChatGPT, said it may be forced to pull out of Europe, depending on what is included in the final text.

According to the EU law, AI systems will be regulated according to their assessed high or low risk. "Those with high risk are systems that could influence voters in elections or harm people's health," the Washington Post wrote. Some of these laws address serious issues; others could lead to overregulation and even ban any AI system that the government considers a threat to democracy, or to its grip on power.

Then there are regulations that promote leftist policies. To be non-discriminatory, an AI system would have to prioritize diversity. To be environmentally friendly, an AI system would have to prioritize reducing CO2 emissions over profit. The countless regulations create opportunities for countless fines, and opportunities for regulators to control the market. The regulations could even be used to gain a competitive advantage.

Take the 2015 Paris Agreement as an example. The agreement put strict regulations on industries; however, it gave China a free pass to ignore those regulations until 2030 and, therefore, an unfair advantage over U.S. competitors (read The Deadly Climate Change Deception). Even those subject to the same regulations can use them in an unfair way.

In 2017, the U.S. found German carmakers Volkswagen, Daimler AG, BMW, Audi and Porsche guilty of pursuing a coordinated strategy of misrepresenting emission results to make diesel cars more competitive at home and abroad. The U.S. government fined them heavily for this obvious infraction; the German government was lenient.

While the EU AI Act doesn't apply to AI systems developed or used exclusively for military purposes, the European Parliament passed a resolution in 2018 calling for an international ban on killer robots or lethal autonomous weapons systems that are able to kill without human involvement.

In 2021, members of the European Parliament adopted Guidelines for Military and Non-Military Use of Artificial Intelligence, which called for AI to be subject to human control. The text calls on the EU to take a leading role in creating and promoting a global framework governing the military use of AI, alongside the [United Nations] and the international community.

Killer drones that operate without human control would give a nation a massive advantage in the next war. The Brookings Institution said that such regulations would only make sense if other nations sign on to them, as with the international Non-Proliferation Treaty. The danger of such treaties, however, is that some may not follow the regulation, and you wouldn't even know it.

Drawing on the insights of British computer scientist Stuart Russell, Max Tegmark describes bumblebee-sized drones capable of killing by strategically bypassing the skull and targeting the brain through the eye. The technology and materials are easy to acquire. According to Tegmark, an AI application could also easily be programmed to kill only people with a certain skin color or ethnicity. Would rogue nations, dictators and terrorist groups follow the ethical rules of war if a treaty regulated them?

Imagine if the very nation that proposed the regulation ended up breaking it. It would certainly take a most deceitful nation to come up with such a plan, but that's exactly what the Bible warns against.

Nahum 3:1 warns of a nation that is full of lies and robbery, or deceit and murder, as it could read. A nation described as such should not be trusted. Ezekiel 23 warns America and Britain (the modern descendants of ancient Israel) against a cunningly devised betrayal from one of their lovers. Trumpet editor in chief Gerald Flurry notes in Nahum - An End-Time Prophecy for Germany that these prophecies are about the very nation that currently leads the European Union: Germany.

Germany's behavior in two world wars could be described as full of deceit and murder. But the Bible reveals that this chapter of mankind's history is not yet closed. God wants Germany to use its wonderful qualities for good. However, due to the sins of our world, the Bible warns that God will allow unspeakable evils to engulf our world one more time. The book of Nahum forecasts that the German war machine will once again rise before its war-making attitude is forever destroyed.

There is wonderful news beyond these horrific scenarios. But we can only understand this great hope of tomorrow if we face reality today.

Read this article:

Beware the EU's AI Regulations - theTrumpet.com

Generative AI Will Have Profound Impact Across Sectors – Rigzone News

Generative AI will have a profound impact across industries.

That's what Amazon Web Services (AWS) believes, according to Hussein Shel, an Energy Enterprise Technologist for the company, who said Amazon has invested heavily in the development and deployment of artificial intelligence and machine learning for more than two decades for both customer-facing services and internal operations.

"We are now going to see the next wave of widespread adoption of machine learning, with the opportunity for every customer experience and application to be reinvented with generative AI, including the energy industry," Shel told Rigzone.

"AWS will help drive this next wave by making it easy, practical, and cost-effective for customers to use generative AI in their business across all three layers of the technology stack, including infrastructure, machine learning tools, and purpose-built AI services," he added.

Looking at some of the applications and benefits of generative AI in the energy industry, Shel outlined that AWS sees the technology playing a pivotal role in increasing operational efficiencies, reducing health and safety exposure, enhancing customer experience, minimizing the emissions associated with energy production, and accelerating the energy transition.

"For example, generative AI could play a pivotal role in addressing operational site safety," Shel said.

"Energy operations often occur in remote, and sometimes hazardous and risky environments. The industry has long sought solutions that help to reduce trips to the field, which directly correlates to reduced worker health and safety exposure," he added.

"Generative AI can help the industry make significant strides towards this goal. Images from cameras stationed at field locations can be sent to a generative AI application that could scan for potential safety risks, such as faulty valves resulting in gas leaks," he continued.

Shel said the application could generate recommendations for personal protective equipment and tools and equipment for remedial work, highlighting that this would help to eliminate an initial trip to the field to identify issues, minimize operational downtime, and also reduce health and safety exposure.
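Rigzone's description stays at the level of a workflow. Sketched in code, and assuming a vision-capable chat model reached through the OpenAI Python SDK purely as a stand-in (the article does not name a specific model or AWS service), the loop might look roughly like this; the model name, prompt, and image URL are placeholders.

```python
# Sketch of the workflow Shel describes: send a field-camera image to a
# vision-capable model and ask it to flag safety risks and suggest PPE.
# Model name, prompt, and image URL are placeholders, not AWS's implementation.
from openai import OpenAI

client = OpenAI()

def scan_site_image(image_url: str) -> str:
    """Ask a multimodal model to review one camera frame for hazards."""
    response = client.chat.completions.create(
        model="gpt-4o",  # any vision-capable chat model
        messages=[
            {
                "role": "user",
                "content": [
                    {
                        "type": "text",
                        "text": (
                            "Review this image from a remote well site. "
                            "List potential safety risks (e.g. leaks, faulty "
                            "valves), recommended PPE, and tools needed for "
                            "remedial work."
                        ),
                    },
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    )
    return response.choices[0].message.content

print(scan_site_image("https://example.com/site-camera/frame-001.jpg"))
```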

"Another example is reservoir modeling," Shel noted.

"Generative AI models can be used for reservoir modeling by generating synthetic reservoir models that can simulate reservoir behavior," he added.

"GANs [generative adversarial networks] are a popular generative AI technique used to generate synthetic reservoir models. The generator network of the GAN is trained to produce synthetic reservoir models that are similar to real-world reservoirs, while the discriminator network is trained to distinguish between real and synthetic reservoir models," he went on to state.

"Once the generative model is trained, it can be used to generate a large number of synthetic reservoir models that can be used for reservoir simulation and optimization, reducing uncertainty and improving hydrocarbon production forecasting," Shel stated.

"These reservoir models can also be used for other energy applications where subsurface understanding is critical, such as geothermal and carbon capture and storage," Shel said.
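Shel describes the standard GAN setup. As a hedged illustration of that generator-versus-discriminator training loop, here is a minimal PyTorch sketch; the grid size, network sizes, and the random stand-in for "real" reservoir models are invented for the example, and a real workflow would train on actual property grids from reservoir characterization.

```python
# Minimal GAN sketch in PyTorch: a generator learns to produce synthetic
# reservoir property grids, a discriminator learns to tell them from real ones.
# The "real" data here is random noise standing in for actual reservoir models.
import torch
import torch.nn as nn

LATENT_DIM, GRID_CELLS = 64, 32 * 32  # 32x32 porosity grid, flattened

generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, GRID_CELLS), nn.Sigmoid(),  # porosity values in [0, 1]
)
discriminator = nn.Sequential(
    nn.Linear(GRID_CELLS, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),  # real-vs-synthetic logit
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

real_grids = torch.rand(512, GRID_CELLS)  # placeholder for real reservoir models

for step in range(200):
    batch = real_grids[torch.randint(0, 512, (32,))]
    noise = torch.randn(32, LATENT_DIM)
    fake = generator(noise)

    # Discriminator step: label real grids 1, synthetic grids 0.
    opt_d.zero_grad()
    d_loss = loss_fn(discriminator(batch), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(32, 1))
    d_loss.backward()
    opt_d.step()

    # Generator step: try to make the discriminator call synthetic grids real.
    opt_g.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_loss.backward()
    opt_g.step()

# After training, sample as many synthetic models as needed for simulation.
synthetic_models = generator(torch.randn(1000, LATENT_DIM)).detach()
```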

Highlighting a third example, Shel pointed out a generative AI based digital assistant.

"Data access is a continuous challenge the energy industry is looking to overcome, especially considering much of its data is decades old and sits in various systems and formats," he said.

"Oil and gas companies, for example, have decades of documents created throughout the subsurface workflow in different formats, i.e., PDFs, presentations, reports, memos, well logs, Word documents, and finding useful information takes a considerable amount of time," he added.

"According to one of the top five operators, engineers spend 60 percent of their time searching for information. Ingesting all of those documents on a generative AI-based solution augmented by an index can dramatically improve data access, which can lead to making better decisions faster," Shel continued.
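The "index plus generative model" pattern Shel refers to is essentially retrieval-augmented generation. The sketch below uses a TF-IDF index from scikit-learn as a deliberately simple stand-in for whatever enterprise search or vector index a real deployment would use (the article names none), and stops at assembling the prompt that would be sent to a generative model; the documents and question are invented.

```python
# Toy retrieval step for a document-assistant workflow: index text extracted
# from legacy documents, find the passage most relevant to an engineer's
# question, and assemble it into a prompt for a generative model.
# The documents and question are invented placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Well A-12 completion report: perforated interval 2,310-2,355 m ...",
    "2011 memo: injection pressure limits revised after integrity test ...",
    "Field X reservoir study: average porosity 14%, permeability 120 mD ...",
]

question = "What is the perforated interval for well A-12?"

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(documents)
query_vector = vectorizer.transform([question])

# Rank documents by similarity to the question and keep the top match.
scores = cosine_similarity(query_vector, doc_vectors)[0]
best = scores.argmax()

prompt = (
    "Answer the question using only the context below.\n\n"
    f"Context:\n{documents[best]}\n\n"
    f"Question: {question}"
)
print(prompt)  # this prompt would then be sent to a generative model
```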

When asked if he thought all oil and gas companies will use generative AI in some way in the future, Shel said he did, but added that it's important to stress that it's still early days when it comes to defining the potential impact of generative AI on the energy industry.

"At AWS, our goal is to democratize the use of generative AI," Shel told Rigzone.

"To do this, we're providing our customers and partners with the flexibility to choose the way they want to build with generative AI, such as building their own foundation models with purpose-built machine learning infrastructure; leveraging pre-trained foundation models as base models to build their applications; or using services with built-in generative AI without requiring any specific expertise in foundation models," he added.

"We're also providing cost-efficient infrastructure and the correct security controls to help simplify deployment," he continued.

The AWS representative outlined that AI applied through machine learning will be one of the most transformational technologies of our generation, tackling some of humanitys most challenging problems, augmenting human performance, and maximizing productivity.

"As such, responsible use of these technologies is key to fostering continued innovation," Shel outlined.

AWS took part in the Society of Petroleum Engineers (SPE) International Gulf Coast Section's recent Data Science Convention event in Houston, Texas, which was attended by Rigzone's President. The event, which is described as the annual flagship event of the SPE-GCS Data Analytics Study Group, hosted representatives from the energy and technology sectors.

Last month, in a statement sent to Rigzone, GlobalData noted that machine learning has the potential to transform the oil and gas industry.

"Machine learning is a rapidly growing field in the oil and gas industry," GlobalData said in the statement.

"Overall, machine learning has the potential to improve efficiency, increase production, and reduce costs in the oil and gas industry," the company added.

In a report on machine learning in oil and gas published back in May, GlobalData highlighted several key players, including BP, ExxonMobil, Gazprom, Petronas, Rosneft, Saudi Aramco, Shell, and TotalEnergies.

Speaking to Rigzone earlier this month, Andy Wang, the Founder and Chief Executive Officer of data solutions company Prescient, said data science is the future of oil and gas.

Wang highlighted that data science includes many tools, among them machine learning, which he noted will be an important part of the future of the sector. When asked if he thought more and more oil companies would adopt data science and machine learning, Wang responded positively on both counts.

Back in November 2022, OpenAI, which describes itself as an AI research and deployment company whose mission is to ensure that artificial general intelligence benefits all of humanity, introduced ChatGPT. In a statement posted on its website on November 30 last year, OpenAI said ChatGPT is a sibling model to InstructGPT, which is trained to follow an instruction in a prompt and provide a detailed response.

In April this year, Rigzone looked at how ChatGPT will affect oil and gas jobs. To view that article, click here.

To contact the author, email andreas.exarheas@rigzone.com

Read more:

Generative AI Will Have Profound Impact Across Sectors - Rigzone News

Mint DIS 2023 | AI won’t replace you, someone using AI will … – TechCircle

Generative artificial intelligence (AI) has put AI in the hands of people, and those who don't use it could struggle to keep their jobs in future, Jaspreet Bindra, Founder and MD, Tech Whisperer Ltd., UK, surmised at the Mint Digital Innovation Summit on June 9.

"We never think about electricity until it's not there. That's how AI used to be. It was always in the background and we never thought about it. With generative AI it has come into our hands, and 200-300 million of us are like, wow!" said Bindra.

He noted that while AI won't replace humans at their jobs, someone using AI very well could. He urged working professionals to recalibrate and embrace generative AI as a powerful tool created by humans, instead of looking at it as a threat.

"There is a new kid in town, who can do a bunch of things that we can too," he said, adding that humans will just be able to do tasks better and will hence have to take advantage of their own ingenuity. "60% of jobs will be impacted, not as jobs themselves but as tasks," he said.

To be sure, Bindra said that he believes generative AI to be a transformative technology, just like Search or the Internet were. He said that the technology will also reshape big tech firms themselves. "The reshaping of big tech has already started, and there's a new trillion-dollar boy in town called Nvidia. You're going to see some shaping and reshaping of the apex of technology as we go forward."

However, he also acknowledged that Generative AI (GAI) is not the same as Artificial General Intelligence (AGI), a fear that many have expressed ever since ChatGPT became popular last year.

"I believe that one day AI will become more intelligent than human beings in certain aspects. What I don't believe is that it'll ever get conscious or sentient. We don't understand our own brain, or our own consciousness; it's the hard problem in philosophy: how can we build something that will be conscious?"

Go here to read the rest:

Mint DIS 2023 | AI won't replace you, someone using AI will ... - TechCircle