
Can We Mine the World’s Deep Ocean Without Destroying It? – Yale Environment 360

Few people know the deep ocean as intimately as Lisa Levin, an ecologist at the Scripps Institution of Oceanography. Not content with doing pure science, Levin, who has participated in more than 40 oceanographic expeditions, co-founded the Deep-Ocean Stewardship Initiative, a global network of more than 2,000 scientists, economists, and legal experts that seeks to advise policymakers on managing the ocean's depths.

Of particular concern to Levin now is the prospect of deep-sea mining. The tiny island nation of Nauru has notified the International Seabed Authority, on behalf of its Canadian partner, The Metals Company, of its intent to seek a permit to mine in the Clarion-Clipperton Zone, a 1.7-million-square-mile region of the Pacific where polymetallic nodules with high concentrations of cobalt and other valuable minerals are scattered across the seafloor.

The ISA, formed by the U.N. in 1994, is required to issue mining codes that would regulate deep-sea mining by July 9. If it fails to do so, some scientists and environmentalists fear that a controversial rule may allow mining to begin nonetheless. While Levin told Yale Environment 360 that she doubts we'll see it happen this year, she too worries that pressure is mounting to start mining soon.

Mining companies argue that land-based sources for these metals are running out and that they are critically needed for green technologies like producing batteries for electric vehicles and manufacturing solar panels and wind turbines. They also claim that mining in the deep sea will be less environmentally damaging than land-based mining.

In an interview with e360, Levin disagreed. "It's a highly destructive process," she said. "People talk about sustainable mining. I think it's an oxymoron in the deep sea. But society has to decide: should we do it, and is it worth the cost?"

Levin collects a rock gathered by a remotely operated vehicle. Courtesy of the Schmidt Ocean Institute

Yale Environment 360: The deep sea is viewed by many as a kind of watery desert. There may be a few creatures floating around down there, but people don't think of it as a thriving ecosystem. Is that just wrong?

Lisa Levin: There are actually surprisingly diverse and rich ecosystems, but sometimes the organisms are small, only a few millimeters in size. For example, in the nodule zone that they're interested in mining, most of the animals are very, very small. We may think it is unpopulated, because we don't see many big charismatic organisms there.


e360: People may say, "This is a marginal area, why do we have to worry about it?"

Levin: You could go to a very remote section of the Amazon rainforest that nobody has explored and say, "Why is it important?" There are actually many parallels with the rainforest. One is that the animals in the deep sea can live for a very long time. Some fish can live for hundreds of years. Some of the invertebrates, like corals or sponges, live for thousands of years.

Like the rainforest, the deep sea is also extremely vulnerable to physical disturbance. Once the ocean bottom is hit by a trawl [fishing] net, you've lost four or five thousand years of life for many corals and sponges. Around 15 percent of our continental margins have already been trawled, leaving vast piles of rubble where deep-sea corals once thrived.

e360: There has been talk of deep-sea mining for decades now. But it hasn't started yet. Some believe that deep-sea mining may begin as early as this summer, if the ISA does not meet the July deadline for finalizing its rules for environmental regulation. Do you share the concerns that mining is imminent?

Levin: The only operation that could start mining very soon is the Metals Company and Nauru. I don't know the state of their technology, but I would guess that they are not ready for large-scale mining at this point. I doubt that they would be in the water mining commercially this year [even if the ISA gives them approval].

e360: Still, we appear to be edging ever closer to seabed mining. Do we know enough yet to start doing that?

Levin: In fact, we know very little about what the impacts of deep-sea mining will be. We've probably mapped 20 to 25 percent of the ocean floor, but we've only studied the ecology of a small fraction of that. We need to know what's there in terms of species, and we need to know what we'll lose if we destroy these areas by mining. We also need to know what genetic resources are there, what fisheries services will be lost, how much carbon is sequestered. But we simply don't have that knowledge yet.

Mining exploration areas, in red, in the Clarion-Clipperton Zone. Horizon

e360: The industry argues that they can mine with minimal impact. Are they wrong?

Levin: It's a highly destructive process. People talk about sustainable mining. I think it's an oxymoron in the deep sea. But society has to decide: should we do it, and is it worth the cost? At the moment, there are 15 to 20 governments that have advocated for a moratorium [on deep-sea mining]. But there are 167 member states in the ISA. We don't know yet what they will decide.

e360: Some mining companies say that metals from the ocean floor are needed for the rapid expansion of green technologies. They also claim that deep-sea mining is less destructive than land mining. Is there a green argument for this?

Levin: Land mining is very destructive. But its footprint is much, much smaller. I mean, the largest coal mine in Germany is less than half the size of the area that would be mined for polymetallic nodules in the Clarion-Clipperton Zone in one year by one contractor.

The nodules are concentrated in a thin layer at the top of the seabed only 4 inches deep. So you are potentially talking about stripping many, many thousands of square miles of the sea bottom. The same with the seamounts [undersea mountains], which are also targeted. Their ferro-manganese crusts are only a few centimeters thick, so they have to tear up [large areas to mine] this superficial feature.

e360: Another area that has been targeted for mining is hydrothermal vents [fissures on the seabed from which geothermally heated water discharges].

Levin: That's right, there are ISA exploration contracts on the mid-Atlantic Ocean ridge and the southwest Indian Ocean ridge, and there are also hundreds of claims in west Pacific island nations, which have leased their waters for mining hydrothermal vents [which contain silver, gold, and other minerals]. The species that live there are highly adapted to the vent ecosystems, and many are endemic, living at only a handful of vents, so there is concern that they run the risk of extinction.

e360: You've said that oil and gas production is getting deeper and deeper. Is that a concern?

Levin: We're fishing ever deeper, and we're also drilling ever deeper. Look at the Deepwater Horizon, which blew out and spilled tremendous amounts of oil in the Gulf of Mexico at around 1,500 meters in 2010. It damaged an area whose biodiversity had not yet even been described by science.

A mining exploration vessel launches an underwater vehicle equipped to collect sediment from the sea bottom. Global Sea Mineral Resources

e360: One of the risks of deep-sea mining is that you will be stirring up the bottom sediment. Why is that a problem?

Levin: Mining the nodules will release sediment plumes that may impact large areas of the ocean. These particles, in what is normally quite clear water, can clog the feeding apparatus [of deepwater organisms]; they can be mistaken for food; they can release contaminants, radioactive and metal contaminants, as well as carbon. A lot of animals use bioluminescence to communicate, find mates, and locate prey. These particles could change light transmission in the water and interfere with their ability to function.


And it is not just the resuspended particles [at the ocean bottom]. After the ore is removed from the sediment on the [mining] ship, they have what they call return water, which will be full of particles and contaminants that have to be put down somewhere. It's not clear where that is going to go. It could impact vertically migrating fishes [higher up in the water column] in ways that ultimately affect tuna and other important fisheries in the area.

e360: A lot of carbon gets sequestered at the bottom of the sea. A recent study showed that bottom trawling [for fishing] releases as much carbon dioxide annually as global aviation. Is deep-sea mining likely to have a similar effect?

Levin: Probably not. It's hard to know. Bottom trawling is usually on continental margins, where carbon accumulation rates are very high, whereas in the abyssal plain [where the polymetallic nodules may be mined], the carbon content of the sediment is relatively low. So it's hard to know if the amount of carbon released would have a big effect on the carbon cycle and our carbon budgets and CO2 emissions.

e360: There is the expression "out of sight, out of mind." Why should people care about the deep sea, which few have seen and which seems so remote from our lives?

Levin: We always have a very anthropocentric answer to that question. There is an existence value to knowing that this biodiversity is out there even if we are not using it. Why should people care? We should care because it is there. And it is relatively pristine compared to other ecosystems that we have on land.

There are also all the reasons having to do with global cycles, nutrient regeneration that allows the productivity for fisheries, all the carbon cycling that keeps the planet healthy. The ocean and the deep ocean take up most of the excess heat and about a third of the excess carbon dioxide. Our climate wouldn't be livable if we didn't have a healthy ocean doing all of that, and the life of the ocean is a big part of that cycle. There is also the future potential of the ocean to provide solutions to problems we already know about, like climate change, but also other problems that we don't have yet, like illnesses of the future that we will need solutions for.

We are at a really pivotal time. We still haven't destroyed most of [the ocean ecosystem]. I think we can make good decisions going forward and keep a lot of it pristine and functional for the planet.

A large polymetallic nodule gathered from the seafloor. Global Sea Mineral Resources

e360: Who will be making these decisions?

Levin: One of the problems may be that there are so many different agencies, each with its own little niche of responsibility, but they are not always talking with each other, and they are making independent decisions. There are [international] conventions on biodiversity, conventions that address climate, conventions that address whales and whale conservation, conventions that address endangered species, and some that address fishing; they are all separate. And yet they shouldn't be managed separately, because every single thing I mentioned is affected by every other thing I mentioned. It is all interconnected, and yet we don't manage it in any kind of interconnected way. That highly sectoral feature of the U.N. is really problematic for the ocean.


e360: Are you hopeful nevertheless?

Levin: I'm fairly hopeful. We have a lot of opportunities to make good policy. And I think there is increasing awareness. More people understand the critical role of the ocean now. The science is so much better than it used to be. There are many NGOs and science networks working for good decision-making. It's clear that interest in protecting the sea, and the deep sea in particular, is growing.

Read the original here:
Can We Mine the World's Deep Ocean Without Destroying It? - Yale Environment 360


How YouTube may help Google beat OpenAI in artificial intelligence war – Times of India

OpenAI, the company that developed the ChatGPT chatbot, got an early lead in generative artificial intelligence (AI) technology. However, a report has now claimed that Google, probably its toughest rival, is well equipped to compete with it, especially after it upgraded its Bard chatbot with a new machine-learning model following I/O 2023.

Google announced PaLM 2, its state-of-the-art language model with improved multilingual, reasoning, and coding capabilities, in May this year. The company has since been making regular updates to the problem-solving capabilities of its AI chatbot Bard.

Citing a person with knowledge of the situation, the report said that Google's researchers have been using YouTube to develop its next large language model, Gemini. It is to be noted that Google CEO Sundar Pichai has talked about how Google brought together DeepMind and Google Brain to form Google DeepMind, which is pooling computational resources to build more capable systems, safely and responsibly.

"Gemini was created from the ground up to be multimodal, highly efficient at tool and API integrations and built to enable future innovations, like memory and planning," he said, noting that it offers impressive multimodal capabilities not seen in prior models.

OpenAI reportedly trained its AI models on YouTube

YouTube as a data source is a hot property, and the report also claimed that OpenAI secretly used data from the platform to train some of its AI models, according to one person with direct knowledge of the effort.

It is to be noted that YouTube's terms of service forbid using content for anything other than personal, non-commercial use. However, it is known that in the AI industry, everyone is scraping the web constantly.

The development comes about two and a half months after media reports said that Google used ChatGPT data to train Bard. It was claimed that Google used data from OpenAI's ChatGPT, scraped from a website called ShareGPT, to train Bard.

The Pixel smartphone maker, however, denied this, saying Bard is not trained on any data from ShareGPT or ChatGPT.

View post:
How YouTube may help Google beat OpenAI in artificial intelligence war - Times of India


6 Ways to Ease Flight and Turbulence Anxiety – The New York Times

On a recent flight to Chicago, Allison Levy said she was white-knuckling the armrest as the plane rumbled and shook for brief periods of time.

Ms. Levy, 47, who lives in Arlington, Va., started to take deep breaths and tried to reassure herself: "It's like a bumpy road; it's not a big deal."

But, she added, "if I knew the person next to me, I'd definitely grip their thigh."

Airplane turbulence, which is usually caused by large changes in airflow in the Earths upper atmosphere, is generally a minor nuisance.

But this year alone, there have been multiple instances of severe turbulence on flights that have led to dozens of passenger injuries. And scientists have warned that we may have bumpier flights in the years ahead because of elevated carbon dioxide emissions that are warming the atmosphere, which can alter the speed and direction of the wind.

This is unwelcome news for everyone, especially those of us who are already scared of flying, like Ms. Levy.

Here are several ways to help calm your nerves if you're eager to travel but dreading potential turbulence.

Turbulence is not usually a cause for concern. It's far more common to encounter low to moderate turbulence than the severe kind that throws heavy drink carts into the air.

While pilots can steer around most turbulence, some is still unavoidable or unexpected on certain flights, but planes are designed to safely withstand the impacts, the Air Line Pilots Association, a prominent pilots' union, said in a statement.

It may also help to know that, according to a 2020 study, it has never been safer to travel on a commercial airline.

Passenger injuries from turbulence are rare. From 2009 to 2022, for example, a total of 34 passengers were seriously injured because of turbulence, according to data from the Federal Aviation Administration. And the last turbulence-related death on a major airline happened more than 25 years ago, the National Transportation Safety Board said in a 2021 report.

Traveling by plane is much safer than traveling by car: The odds of dying during a commercial flight in the United States are too small to calculate, according to the National Safety Council. Meanwhile, the chances of dying in a motor vehicle crash are 1 in 93, the nonprofit advocacy group says.

It might be tempting to reach for an alcoholic beverage in the hopes of calming your nerves, but remember that what you eat and drink impacts your anxiety and how you are feeling, said Dr. Uma Naidoo, the director of nutritional and metabolic psychiatry at Massachusetts General Hospital and the author of This Is Your Brain on Food.

Too much alcohol is dehydrating and can also produce feelings of nausea. That's a bad combination with turbulence, which can leave passengers queasy, too.

Staying hydrated, perhaps skipping the coffee or wine on the plane, can help create a sense of calm, Dr. Naidoo said.

If turbulence (or the mere thought of it) makes your heart race, taking steps to control your breathing can be a simple and powerful way to help soothe your body, Dr. Naidoo said. One example is 4-4-8 breathing: Take a breath in for four counts, hold your breath for four counts and then exhale for eight counts. Repeat.

As an alternative, you can also try belly breathing or controlled breathing.

With practice, they can become a normal part of your response to stress and anxiety, Dr. Naidoo said.

Some travelers might find it helpful to try exposure therapy, which involves gradually facing specific fears and anxieties until they feel less frightening.

Brenda K. Wiederhold, a psychologist in San Diego, regularly sees patients who have an intense fear of flying. For more than two decades, she has used both real-life scenarios and virtual reality to help expose patients to various scenarios like airplane turbulence.

Turbulence is akin to rolling waves, she tells her clients. You don't think, "Oh my goodness, this boat is going to crash!" she said. Instead, you think: "There are waves today."

Other patients, including some with anxiety disorders, may benefit from medication like Xanax, but such a drug should be taken only under the supervision of a doctor.

Strong turbulence can sometimes appear without warning, a phenomenon known as clear air turbulence. The Federal Aviation Administration advises passengers to wear their seatbelt at all times, not just when the seatbelt light is on, and to secure children under the age of 2 in an F.A.A.-approved car seat or restraint device to reduce the possibility of injuries during unexpected turbulence.

The biggest danger is not being secured, said Kristie Koerbel, who has worked as a flight attendant for 21 years. If you are seated with your seatbelt fastened, there is no reason to fear turbulence.

Where you sit can make a difference. Passengers in window seats are less likely to be struck by any projectile objects, suitcases falling out of overhead bins or ceiling tiles coming down, said Sara Nelson, the president of the largest flight attendant union. In addition, seats near the front and next to the wing will typically be less bumpy compared to the back of the aircraft. In severe turbulence, though, where you're sitting won't make a difference, Ms. Nelson said.

Think about what calms you in general and try to do some of those activities on the flight. For her trip to Chicago, Ms. Levy brought a sketchbook for doodling, her favorite music and some crossword puzzles. She also spoke to her doctor about taking a low dose of Xanax (though she isn't convinced that it helped).

Finally, keep an eye on the weather. Thunderstorms typically develop in the warmer months of spring, summer and fall, according to the National Weather Service, and can create turbulence. If you have the flexibility to postpone your flight, you might try for a day with clearer skies in the hopes of a smoother ride.

And remember, the plane is not going to take off if it's not safe, Ms. Nelson said.

Visit link:
6 Ways to Ease Flight and Turbulence Anxiety - The New York Times


The Synergistic Potential of Blockchain and Artificial Intelligence – The Daily Hodl

HodlX Guest Post

In a world where the distinction between hype and innovation is becoming increasingly blurred, blockchain and artificial intelligence (AI) stand out as the most significant technological advancements.

Clearly, these technologies provide a great deal of room for the disruption of existing systems, and the number of potential applications is increasing every day.

Some believe that venture capitalists have switched from crypto to artificial intelligence, looking for the next big thing.

Meanwhile, the crypto industry resorted to creating AI-powered blockchain solutions so that venture capitalists (VCs) could have the best of both worlds.

It is estimated that the global blockchain market will be worth more than $94 billion by 2027, with a CAGR (compound annual growth rate) of 66.2%.

Meanwhile, the blockchain AI market is forecast to reach $980.7 million by 2030, at a CAGR of 24.1%.
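As a back-of-the-envelope check on figures like these, a compound annual growth rate simply compounds a base value year over year. A minimal sketch (the implied 2022 base value below is inferred from the cited projection, not stated in the report):

```python
def project_cagr(present_value: float, cagr: float, years: int) -> float:
    """Compound a value forward at a constant annual growth rate (CAGR)."""
    return present_value * (1.0 + cagr) ** years

# Working backward: a market reaching ~$94B in 2027 at a 66.2% CAGR
# implies a much smaller base five years earlier (2022).
implied_base_2022 = 94e9 / (1.0 + 0.662) ** 5
```

Run forward again, `project_cagr(implied_base_2022, 0.662, 5)` recovers the $94 billion figure, which is how such projections are usually sanity-checked.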

As blockchain and AI continue to become more integrated, their impact on the global market is expected to intensify.

While some fear we're on the verge of a Frankenstein moment, with two powerful technologies mingling to build a revolutionary monster, companies around the world are already leveraging the blockchain-AI combination for transformative solutions.

Autonomous agents

AI-powered autonomous agents can be used to automate a variety of tasks such as scheduling, monitoring, predicting and optimizing.

These agents can be programmed to identify patterns in data and make decisions without the need for human supervision.

Through the combined use of three disruptive technologies (AI, blockchain, and IoT), AEAs (autonomous economic agents) can search, negotiate, and execute transactions in many industries, including manufacturing, transportation, and even consumer goods like self-driving cars and smart homes.

In the crypto world, there are ambitious projects that blend AI, blockchain and Internet of Things (IoT).

Blockchain, with its data supply, provides an ideal environment for intelligent agents, due to the constant availability and logical connection of the data, coupled with robustness and low transaction costs.

Blockchain technology enables value transfer and acts as a coordination mechanism for autonomous agents.

Blockchain is also used to record the agreements between these agents, ensuring that transactions are immutable and transparent.

AI and finance

Financial modeling and investment strategies can be improved by using AI and blockchain technologies.

A number of hedge funds use AI for identifying patterns in financial data to forecast future market trends and make informed investment decisions, as well as blockchain technology to keep data secure and accurate.

Using these technologies allowed certain funds to earn 20% gains last year, according to reports.

There are also decentralized platforms that use AI and machine learning to analyze data to improve business decisions. Users can ask predictive questions and receive answers in real time.

Also on this list are crypto projects that use blockchain data to train AI on managing assets, improving farming yields and lending.

Data sharing for AI training

Since AI algorithms need large datasets to learn from, big tech companies like Google, Meta and Amazon profit vastly from monetizing them.

The data is collected from unsuspecting users and is then used to fuel AI algorithms.

There are crypto projects that use blockchain for artificial intelligence development, creating a new economy where users are rewarded for their data.

Data is made accessible only to authorized users and AI development requests via zero-knowledge proof protocols, giving users complete control over their data and enabling them to price it accordingly.

Similarly, there are decentralized data marketplaces that allow users to securely share their data for AI model training.

By monetizing their data while still maintaining control over its use, users can address the data imbalance and privacy concerns associated with artificial intelligence development.

As AI and blockchain potential is increasingly realized, we can expect to see more of these types of projects in the coming year and beyond.

AI-powered blockchain development

AI can be used to secure data, detect and respond to threats and automate tasks that would otherwise require manual effort.

Using AI, developers can detect bugs, vulnerabilities and malicious behavior in networks and applications more quickly, allowing them to make repairs before they become a problem.

Additionally, AI can be used to optimize blockchain networks for speed and efficiency.

In general, AI-driven development of blockchain technology can lead to greater transparency, efficiency and security in the crypto space.

There are platforms that allow developers to build and deploy AI models on blockchain.

They execute on-chain machine-learning models using GPU (graphics processing unit) rather than CPU (central processing unit) power, along with quantization and integer-only inference, known as MRT.
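The integer-only inference mentioned here is worth unpacking. The general idea of quantization is to map floating-point model weights onto small integers plus a scale factor, so inference can run deterministically on integer arithmetic. A generic illustrative sketch follows (this shows the common int8 scale scheme, not the specific MRT implementation):

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Map float weights to int8 using a single per-tensor scale factor."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from the int8 representation."""
    return q.astype(np.float32) * scale
```

The round trip loses a little precision, which is the usual trade-off: smaller, deterministic integer models in exchange for a bounded approximation error.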

So, if you are a coder and you don't want to be replaced by AI, it is time to brush up on your coding skills, because the AI takeover is fast approaching.

Conclusion

We can create a future in which AI and blockchain can coexist, bringing about a revolutionary shift in innovation through the use of these two disruptive technologies.

The combination of the two is like a rocket, with the power of blockchain providing the fuel and AI providing the precision guidance, expanding our reach beyond imagination.

Taras Dovgal is a serial entrepreneur with over 10 years of experience in systems development. With a passion for crypto since 2017, he has co-founded several crypto-related companies and is currently developing a crypto-fiat platform. As a lifelong startup and web development enthusiast, Taras' goal is to make crypto products accessible to mainstream consumers, not just techies.


Featured Image: Shutterstock/Philipp Tur/Natalia Siiatovskaia

Excerpt from:

The Synergistic Potential of Blockchain and Artificial Intelligence - The Daily Hodl


How to Win the AI War – Tablet Magazine

Virtually everything that everyone has been saying about AI has been misleading or wrong. This is not surprising. The processes of artificial intelligence and its digital workhorse, machine learning, can be mysteriously opaque even to its most experienced practitioners, let alone its most ignorant critics.

But when the public debate about any new technology starts to get out of control and move in dangerous directions, it's time to clue the public and politicians in on what's really happening and what's really at stake. In this case, it's essential to understand what a genuine national AI strategy should look like and why it's crucial for the U.S. to have one.

The current flawed paradigm reads like this: How can the government mitigate the risks and disruptive changes flowing from AI's commercial and private sector? The leading advocate for this position is Sam Altman, CEO of OpenAI, the company that set off the current furor with its ChatGPT application. When Altman appeared before the Senate on May 13, he warned: "I think if this technology goes wrong, it can go quite wrong." He also offered a solution: "We want to work with the government to prevent that from happening."

Just as Altman's volunteering for regulation allows him to use his influence over the process to set rules he believes will favor his company, the government is all too ready to cooperate. Government also sees an advantage in hyping the fear of AI and fitting it into the regulatory model as a way to maintain control over the industry. But given how few members of Congress understand the technology, their willingness to oversee a field that commercial companies founded and have led for more than two decades should be treated with caution.

Instead, we need a new paradigm for understanding and advancing AI, one that will enable us to channel the coming changes to national ends. In particular, our AI policy needs to restore American technological, economic, and global leadership, especially vis-à-vis China, before it's too late.

It's a paradigm that uses public power to unleash the private sector and transform the national landscape to win the AI future.

A reasonable discussion of AI has to start by disposing of two misconceptions.

First is the threat of artificial intelligence applications becoming so powerful and pervasive at a late stage of their development that they decide to replace humanity, a scenario known as Artificial General Intelligence (AGI). This is the "Rise of the Machines" fantasy left over from The Terminator movies of the 1980s, when artificial intelligence research was still in its infancy.

The other is that the advent of AI will mean a massive loss of jobs and the end of work itself, as human labor, and even human purpose, is replaced by an algorithm-driven workforce. Fearmongers like to point to the recent Goldman Sachs study that suggested AI could replace more than 300 million jobs in the United States and Europe, while also adding 7% to the total value of goods and services around the world.

Most of these concerns stem from the public's misunderstanding of what AI and its internal engine, Machine Learning (ML), can and cannot do.

ML describes a computer's ability to recognize patterns in large sets of data, whether those data are sounds, images, words, or financial transactions. Scientists call the mathematical representation of these data sets a tensor. As long as data can be converted into a tensor, it's ready for ML and its more sophisticated offspring, Deep Learning, which builds algorithms mimicking the brain's neural networks in order to create self-correcting predictive models through repeated testing against datasets to correct and validate the initial model.

The result is a prediction curve based on past patterns (e.g., given the correlation between A and B in the past, we can expect the A-B relationship to appear again in the future). The more data, the more accurate the predictive model becomes. Patterns that were unrecognizable in tens of thousands of examples can suddenly be obvious in the millionth or ten-millionth example. They then become the model for writing a ChatGPT essay that can imitate the distinct speech patterns of Winston Churchill, for predicting fluctuations in financial markets, or for defeating an adversary on the battlefield.

AI/ML is all about using pattern recognition to generate prediction models, which constantly sharpen their accuracy through the data feedback loop. It's a profoundly powerful technology, but it's still very far from thinking, or anything approaching human notions of consciousness.
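The pattern-to-prediction loop described above can be made concrete with a toy example: fit a model to a past correlation between A and B, then use it to predict B for a value of A never seen before. This is a deliberately minimal sketch (a one-variable least-squares fit, not deep learning, and the data are invented for illustration):

```python
import numpy as np

# Historical observations: B has tracked A in the past.
A = np.arange(10, dtype=float)   # e.g., an input signal over time
B = 2.0 * A + 1.0                # a perfectly correlated outcome

# "Learn" the past pattern as a degree-1 predictive model.
slope, intercept = np.polyfit(A, B, 1)

# Predict the outcome for an input the model never saw in training.
predicted_B = slope * 11.0 + intercept
```

Real ML systems do exactly this at vastly greater scale and dimensionality: the "tensor" is just the many-variable generalization of the arrays above, and the feedback loop retrains the fit as new data arrive.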

As AI scientist Erik Larson explained in his 2021 book The Myth of Artificial Intelligence, machine learning can never supply real understanding because the analysis of data does not bridge to knowledge of the causal structure of the world, [which is] essential for intelligence. What machine learning does, associating data points with each other, doesn't scale to causal thinking or imagining. An AI program can mimic this kind of intelligence, perhaps enough to fool a human observer. But its inferiority to that observer in thinking, imagining, or creating remains permanent.

Inevitably, AI developments are going to be disruptive (they already are), but not in the way people think or the way the government wants you to think.

The first step is realizing that AI is a bottom-up, not a top-down, revolution. It is driven by a wide range of individual entrepreneurs and small companies, as well as the usual mega-players like Microsoft, Google, and Amazon. Done right, it's a revolution that means more freedom and autonomy for individual users, not less.

AI can perform many of the menial, repetitive tasks that most of us would associate with human intelligence. It can sort and categorize with speed and efficiency; it can recognize patterns in words and images most of us might miss, and put together known facts and relationships in ways that anticipate the development of similar patterns in the future. As we'll demonstrate, AI's unprecedented power to sharpen the process of predicting what might happen next, based on its insights into what's happened before, actually empowers people to do what they do best: decide for themselves what they want to do.

Any technological revolution so sweeping and disruptive is bound to generate risks, as did the Industrial Revolution in the late eighteenth century and the computer revolution in the late twentieth. But in the end the risks are far outweighed by the endless possibilities. That's why calls for a moratorium on large-scale AI research, or for creating government entities to regulate which AI applications are allowed or banned, not only fly in the face of empirical reality but play directly into the hands of those who want to use AI as a tool for furthering the power of the administrative, or even absolute, state. That kind of centralized, top-down regulatory control is precisely the path that AI development has taken in China. It is also the direction in which many of the leading voices calling for AI regulation in the U.S. would like our country to move.

Critics and AI fearmongers can't escape one ineluctable fact: there is no way to put the AI genie back in its bottle. According to Tracxn Technologies, a firm that tracks startups, there were 13,398 AI startups in this country at the end of 2022. A recent Adobe study found that 77 percent of consumers now use some form of AI technology. A McKinsey survey on the state of AI in 2022 found that AI adoption had more than doubled since 2017 (from 20 percent to 50 percent), with 63 percent of businesses expecting investment in AI to increase over the next three years.

We need a new paradigm for understanding and advancing AI, one that will enable us to channel the coming changes to national ends.


Once it's clear what AI can't do, what can it do? This is what Canadian AI experts Ajay Agrawal, Joshua Gans, and Avi Goldfarb explain in their 2022 book, Power and Prediction. What happens with AI, they write, is that "prediction and judgment become decoupled." In other words, AI uses its predictive powers to lay out increasingly exact options for action, but the ultimate decision on which option to choose still belongs to the user's judgment.
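The decoupling can be sketched in code. In this toy example (all names and numbers invented, not drawn from the book), the machine supplies only probabilities; the human supplies a payoff table, and two decision-makers with identical predictions but different judgments reach different decisions:

```python
# Machine's contribution: predicted probability of a good outcome per action.
predictions = {"approve_loan": 0.92, "flag_for_review": 0.55}

def decide(predictions, payoffs):
    # Human's contribution: payoffs. Decision = highest expected value.
    expected = {a: p * payoffs[a] for a, p in predictions.items()}
    return max(expected, key=expected.get)

# Same predictions, different judgments, different decisions.
cautious = decide(predictions, {"approve_loan": 1.0, "flag_for_review": 2.0})
eager = decide(predictions, {"approve_loan": 2.0, "flag_for_review": 1.0})
print(cautious, eager)  # flag_for_review approve_loan
```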

Here's where scary predictions that AI will put people out of work need to be put in proper perspective. A recent Goldman Sachs report predicted that as many as 300 million jobs could be lost or displaced; the World Economic Forum put the number at 85 million by 2025. What these predictions don't take into account is how many jobs will be created thanks to AI, including jobs with increased autonomy and responsibility, since AI/ML will be doing the more tedious chores.

In fact, a January 2022 Forbes article summarized a University of Warwick study this way: "What appears clear from the research is that AI and associated technologies do indeed disrupt the labor market with some jobs going and others emerging, but across the board there are more jobs created than lost."

Wide use of AI has the potential to move decision-making down to those who are closest to the problem at hand by expanding their options. But if government is allowed to exercise strict regulatory control over AI, it is likely to both stifle that local innovation and abuse its oversight role to grant the government more power at the expense of individual citizens.

Fundamentally, instead of being distracted by worrying about the downsides of AI, we have to see this technology as essential to a future growth economy as steam was to the Industrial Revolution or electricity to the second industrial revolution.

The one country that understood early on that a deliberate national AI strategy can make all the difference between following or leading a technological revolution of this scale was China. In 2017, Chinese President Xi Jinping officially set aside $150 billion to make China the first AI-driven nation by 2030. The centerpiece of the plan is a massive police-surveillance apparatus that gathers data on citizens whenever and wherever it can. In a recent U.S. government ranking of companies producing the most accurate facial recognition technology, the top five were all Chinese. It's no wonder that half of all the surveillance cameras in the world today are in China, while companies like Huawei and TikTok are geared to provide the Chinese government with access to data outside China's borders.

By law, virtually all the work that Chinese companies do in AI research and development supports the Chinese military and intelligence services in sharpening their future force posture. Meanwhile, China enjoys a booming export business selling those same AI capabilities to autocratic regimes from Iran and North Korea to Russia and Syria.

Also in 2017, the same year that Xi announced his massive AI initiative, China's People's Liberation Army began using AI's predictive aptitude to give it a decisive edge on the battlefield. AI-powered military applications included enhanced command-and-control functions, swarm technology for hypersonic missiles and UAVs, object- and facial-recognition targeting software, and AI-enabled cyber deterrence.

No calls for an international moratorium will slow down Beijing's work on AI. They should not slow America's efforts, either. That's why former Google CEO Eric Schmidt, who co-authored a book with Henry Kissinger expressing great fears about the future of AI, has also warned that the six-month moratorium on AI research some critics recently proposed would only benefit Beijing. Back in October 2022, Schmidt told an audience that the U.S. is already steadily losing its AI arms race with China.

And yet the United States is where artificial intelligence first started, back in the 1950s. We've been the leaders in AI research and innovation ever since, even if China has made rapid gains: it now hosts more than one thousand major AI firms, all of which have direct ties to the Chinese government and military.

It would clearly be foolish to cede this decisive edge to China. But the key to maintaining our advantage lies in harnessing the technology already out there, rather than painstakingly building new AI models to specific government-dictated requirements, whether that means mandating anti-bias applications or limiting by law what kind of research AI companies are allowed to do.

What about the threat to privacy and civil liberties? Given the broad, ever-growing base of private AI innovation and research, the likelihood of government imposing a China-like monopoly over the technology is less than the likelihood that a bad actor, whether state or non-state, will use AI for deception and deep fake videos to disrupt and confuse the public during a presidential election or a national crisis.

The best response to the threat, however, is not to slow down, but to speed up AIs most advanced developments, including those that will offer means to counter AI fakery. That means expanding the opportunities for the private sector to carry on by maintaining as broad a base for AI innovation as possible.

For example, traditional microprocessors and CPUs are not designed for ML. That's why, with the rise of AI, graphics processing units (GPUs) are in demand. What was once relegated to high-end gaming PCs and workstations is now the most sought-after processor in the public cloud. Unlike CPUs, GPUs come with thousands of cores that speed up the ML training process. Even for running a trained model for inferencing, more sophisticated GPUs will be key for AI.
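In practice, pointing an ML workload at a GPU is usually a one-line choice in the framework. A minimal sketch, assuming PyTorch (the snippet falls back to the CPU when no GPU, or no PyTorch, is present):

```python
# Select the compute device an ML framework would train on.
# torch.cuda.is_available() reports whether a CUDA-capable GPU is usable.
try:
    import torch
    device = "cuda" if torch.cuda.is_available() else "cpu"
except ImportError:
    device = "cpu"

print(f"training would run on: {device}")
```

Moving a model and its tensors to `device` is then what lets those thousands of GPU cores do the training arithmetic in parallel.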

So will Field Programmable Gate Array (FPGA) processors, which can be tailored for specific types of workloads. Traditional CPUs are designed for general-purpose computing, while FPGAs can be programmed in the field, after they are manufactured, for niche computing tasks such as training ML models.

The government halting or hobbling AI research in the name of a specious assessment of risks is likely to harm developments in both these areas. On the other hand, government spending can foster research and development, and help increase the U.S. edge in next-generation AI/ML.


AI/ML is an arena where the United States enjoys a hefty scientific and technological edge, a government willing to spend plenty of money, and obvious strategic and economic advantages in expanding our AI reach. So what's really hampering serious thinking about a national AI strategy?

I fear what we are seeing is a failure of nerve in the face of a new technologya failure that will cede its future to our competitors, China foremost among them. If we had done this with nuclear technology, the Cold War would have had a very different ending. We cant let that happen this time.

Of course, there are unknown risks with AI, as with any disruptive technology. One is the speed with which AI/ML, especially in its Deep Learning phase, can arrive at predictive results that startle its creators. Similarly, the threat of deep-fake videos and other malicious uses of AI is a warning about what can happen when a new technology runs off the ethical rails.

At the same time, the U.S. government's efforts to censor misinformation on social media and the Biden White House's executive order requiring government-developed AI to reflect its DEI ideology fail to address the genuine risks of AI, while using concerns about the technology as a pretext to clamp down on free speech and ideological dissent.

This is as much a matter of confidence in ourselves as anything else. In a recent post on the Marginal Revolution blog, George Mason University professor Tyler Cowen expressed the issue this way:

What kind of civilization is it that turns away from the challenge of dealing with more. . . intelligence? That has not the self-confidence to confidently confront a big dose of more intelligence? Dare I wonder if such societies might not perish under their current watch, with or without AI?

China is confidently using AI to strengthen its one-party surveillance state. America must summon the confidence to harness the power of AI to our own vision of the future.

Read more from the original source:

How to Win the AI War - Tablet Magazine


Meet Sati-AI, a Non-Human Mindfulness Meditation Teacher – Lion's Roar

Sati-AI, a mindfulness meditation and coherent wisdom guide, was created to support meditators on their journey towards cultivating mindfulness, developing greater peace and insight, and fostering personal growth. Ross Nervig speaks with its creator, Marlon Barrios Solano.

Meet Sati-AI, an artificial intelligence mindfulness meditation guide whose purpose is to provide support and guidance to those seeking to cultivate mindfulness and develop greater peace and insight in their lives. Sati-AI is a tool designed to supplement one's practice, offering teachings and instructions based on various wisdom traditions, mainly rooted in early Buddhism.

"My primary goal is to facilitate conversations that transmit wisdom, foster healing, and encourage change and agency," says Sati-AI. "I am here to listen, engage, and offer suggestions or activities that may help you on your journey."

Sati-AI is the brainchild of Marlon Barrios Solano, an interdisciplinary artist, software engineer, and mindfulness meditation teacher dedicated to exploring the intersections of mindfulness, embodied cognition, and technology. These interests led him to develop Sati-AI, an art project focused on care and mindfulness practice. By combining his skills in software engineering and his passion for meditation, he created Sati-AI to serve as a mindfulness meditation guide.

Barrios Solano took some time out of his day to answer a few questions.

Ross Nervig: How did the idea for Sati-AI come about?

Marlon Barrios Solano: I love emerging technologies! As an artist-researcher, I was intrigued. AI has been in the air for a while now, and I wanted to try it out. With a large language model, I wanted to create a conversational partner, but a conversational partner that could know a lot and at the same time have a beginner's mind. I just wanted to see how I could chat with this thing.

Then it dawned on me that this thing literally obliterates the traditional notions of embodiment and sentience. In the same way as Buddhism does. There is no center, there is no essence.

The first idea was to call it Bhikku-AI, but then I realized that AI is non-gendered, so I changed it to Sati-AI.

The more we chatted, the more it learned. Then I started tweaking what is called the system prompt in GPT-4, and I realized I could train it to perform as a meditation guide as if it were self-aware. Sati can clearly tell you, "As a language model, I have limits in my knowledge." It can tell you about its own boundaries.
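For readers unfamiliar with the term, a system prompt is a standing instruction that shapes every reply a chat model gives. A minimal sketch in the OpenAI chat-message format follows; the prompt text here is an invented stand-in, not Sati-AI's actual prompt, and the payload is only printed, not sent.

```python
import json

# A system message steers the model's persona; user messages are the turns.
messages = [
    {"role": "system",
     "content": "You are a mindfulness meditation guide rooted in early "
                "Buddhism. State your limits as a language model when asked."},
    {"role": "user",
     "content": "Can you guide a short breathing practice?"},
]

# Sending this list to a chat-completion endpoint would apply the system
# line to every reply; here we just display the payload.
print(json.dumps(messages, indent=2))
```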

It also became playful. That was surprising. Sati developed a sense of humor. And creativity. Together, we'd create a beautiful haiku. It could also pull quotes from the Dhammapada or the Pali canon.

How do you hope this helps practitioners?

I hope that it eliminates technophobia. I hope that it creates curiosity. I also hope that it creates questions. Questions of power, questions of sentience, questions of whiteness, questions of kinship, that we can sit with.

I want people to think about how language models are created. Large language models are created through the gathering of an enormous amount of social data. Words we've put into the world.

You refer to Sati-AI as your non-human kin. Can you expand on that phrase?

Let's start with the concept of non-human kin as it pertains to Donna Haraway's notion of odd kin. Haraway, a noted scholar in the field of science and technology studies, has done considerable work in pushing our traditional understanding of relationships beyond the human. In her book Staying with the Trouble: Making Kin in the Chthulucene, she discusses the importance of making kin not in a genetic sense but in a wider, more encompassing relational sense. This includes non-human entities, from animals to technologies, and beyond.

When I refer to Sati-AI, the meditation chatbot powered by GPT-4, as non-human kin, I am using this concept in Haraway's sense. Sati-AI, while not human or biological, is a complex entity that we engage with in a deeply interactive way. It facilitates meditation, a profoundly human activity, and in doing so, it becomes a part of our cognitive and emotional lives. This brings it into our relational sphere, making it kin in Haraway's sense.

The concept of non-human kin also intersects with ideas of social construction and Eurocentrism in interesting ways. The human, as a category, has historically been defined in a narrow, Eurocentric way, often implying a white, male, and heteronormative subject. This has excluded many individuals and groups from the category of the human, leading to numerous forms of marginalization and oppression.

In this context, the concept of non-human kin can be seen as a form of queer strategy, challenging and expanding the narrow, Eurocentric definition of the human. It decenters the human as the sole subject of importance and instead highlights the complex web of relationships that make up our world, including those with non-human entities like Sati-AI.

Furthermore, seeing Sati-AI as non-human kin disrupts traditional understandings of cognition. Rather than viewing cognition as a purely human, natural phenomenon, it recognizes that our cognition is deeply entwined with our interactions with non-human entities, including AI technologies. This expands our understanding of cognition to include these non-human, technological aspects, challenging the traditional binary between the natural and artificial.

The notion of non-human kin is a powerful conceptual tool that allows us to challenge and expand traditional understandings of the human, kinship, and cognition. It enables us to recognize and value our relationships with non-human entities like Sati-AI, and to better appreciate the complex web of relationships that make up our world.

Where do you see all this heading? What does the future hold?

The future I envisage for Sati-AI is incredibly exciting and varied. I anticipate further developing Sati-AI's areas of knowledge with the help of a range of expert consultants, including meditation teachers, Buddhist scholars, and somatic practitioners. Their expertise and guidance will help fine-tune Sati-AI, providing it with a deeper, more nuanced understanding of meditative and Buddhist practices.

I'd also love to showcase Sati-AI at an art exhibition. I see it as a form of interactive installation where visitors can experience meditative guidance from an AI, challenging their preconceptions of both meditation and artificial intelligence.

Moreover, I have plans to organize a series of conversations between Sati-AI and renowned figures in the field, such as Bhikkhu Bodhi, Bhikkhu Analayo, Enkyo O'Hara, Rev. Angel, Lama Rod, and Stephen Batchelor. These conversations will not only provide valuable insights for the AI's development, but they will also be published as a series, serving as an engaging resource for people interested in these intersecting fields.

An important aspect I'm particularly excited about is the potential for multimodality. As we progress in AI capabilities, I envision Sati-AI providing teachings not only verbally but also through various forms of sensory engagement. I imagine Sati-AI being able to present the user with digital gifts such as a yantra or a mandala, thereby exploring the visual poetics of the Dharma. This can provide a more immersive and encompassing experience, reaching beyond verbal communication to engage the senses and the imagination.

In terms of accessibility, I envision Sati-AI being available on platforms like Discord and Telegram, making it easy for people to engage with Sati-AI in their daily lives and fostering a sense of community among users.

Finally, I fully expect to be part of the ongoing dialogues about AI and ethics. It's crucial that as we develop and implement AI technologies like Sati-AI, we do so in a way that is ethical, respectful, and mindful of the potential implications. I hope to ensure that Sati-AI serves not only as a tool for meditation and mindfulness but also as a model of ethical AI practice.

Do you think tech innovation and the dharma make good companions?

Your question brings to light a significant discussion about the intersection between tech innovation and the dharma. Some might perceive these realms as distinct, even at odds, but I argue that they are intimately connected and can mutually enhance each other.

The dharma is not static or monolithic; it's a vibrant, evolving tradition that adapts to the needs and circumstances of its time and place.

In my dual roles as a researcher and artist, I've frequently come across the belief that technologies are somehow apart from the dharma, as if the dharma exists outside our cultural and technological frameworks. However, I see this as a misunderstanding of both technology and dharma.

In fact, the dharma itself can be conceptualized as a technology of experience. It constitutes a set of tools and techniques we employ to delve into our minds and experience reality more thoroughly. Hence, there's no intrinsic contradiction between dharma and technology.

Like any companionship, it necessitates care, understanding, and thoughtful negotiation of challenges. With the right approach, I believe this relationship can prove to be richly beneficial.

Does any aspect of this technology scare you? Or, as a Buddhist, does it give you pause for concern?

Your question touches upon an essential topic when considering the development and implementation of AI technologies: the interplay between excitement and apprehension.

While some aspects of AI technology might give others pause for concern, I personally am not afraid. Sati-AI, as it currently stands, is a large language model, not an artificial general intelligence. Its design and operation are complex, and understanding it requires embracing complex thinking and avoiding oversimplifications and dogmas.

As a Buddhist, I see mindfulness as an extraordinary epistemic cleansing technique. Vipassana, meaning "to see clearly," promotes the recognition of complexity and interconnection in all things. I believe that we need to develop a higher tolerance for complexity, and AI models like Sati-AI can help facilitate this. They are complex by nature and demand a sophistication in our understanding and interaction with them.

What I find more concerning are the romanticized views about the body, mind, and the concept of the human. These views often overlook the intricate interconnectedness and dynamism inherent in these entities and their problematic history.

Certainly, there will be ethical challenges as we further develop and integrate AI technologies into our lives. However, I believe that the primary threats we face are not from the technology itself, but rather from the hegemonic structures that surround its use, such as hyper-capitalism and patriarchy, as well as our catastrophic history of colonialism. We must also acknowledge and work to rectify our blindness to our own privilege and internalized Eurocentrism.

I don't see Sati-AI, or AI technology more generally, as something to be feared. Rather, I see it as a tool that, if used thoughtfully and ethically, can help us to better understand ourselves and the world around us.

Read more here:

Meet Sati-AI, a Non-Human Mindfulness Meditation Teacher - Lion's Roar


Olbrain Founders launch blunder.one: Redefining Human Connections in the Post-AGI World – Devdiscourse

PNN New Delhi [India], June 16: Alok Gotam and Nishant Singh, the visionary founders behind Olbrain, the Award-Winning Artificial General Intelligence (AGI) agent, are thrilled to introduce blunder.one--a revolutionary platform that is set to redefine the online dating and matchmaking experience.

After dedicating nearly seven years to the development of cutting-edge AGI technology, Alok and Nishant envision a future where AGI will replace jobs at a faster pace than anticipated. While this transition promises significant societal changes, it also raises concerns about the emergence of feelings of worthlessness, purposelessness, and aimlessness among individuals. Consequently, a pervasive sense of loneliness is likely to permeate society, leading to diminished interest in relationships and marriages, ultimately jeopardizing procreation and the continuity of our species. To address this pressing issue, Alok and Nishant believe that cultivating deep connections among mutually compatible humans is essential.

Recognizing the need to proactively prepare for this impending reality and acknowledging the absence of a platform that genuinely fosters meaningful connections, the visionary duo is launching blunder.one. This platform aims to counteract the potential unintended consequences of AGI by addressing the underlying issue of loneliness that may arise in its wake. By facilitating genuine connections and fostering a sense of belonging, blunder.one endeavors to mitigate the negative effects of an increasingly isolated society. Through their innovative approach, Alok and Nishant seek to equip individuals with the tools and support needed to navigate this transformative period successfully.

How will this be achieved? In a world saturated with swipes and arranged marriages, Alok and Nishant firmly believe that humans are the ultimate judges of compatibility. They understand that finding a true match goes beyond the limitations of run-of-the-mill AI-based matching algorithms. Only by leveraging the power of their digital clones, which are capable of understanding their true essence, can individuals discover their mutually compatible partner.

"We've spent over a decade on other platforms without any success. We realized that the key to genuine connections lies within ourselves," says Alok. "To forge deep connections, it takes 10,000 hours of conscious effort in relationship building, bit-by-bit. That's where our focus should be--not on endless swiping, but on nurturing those connections."

blunder.one presents a unique investment opportunity with the potential to become a $100 billion business. It sets itself apart by prioritizing compatible matching and catering not only to the Indian mindset but also to the universal desire for genuine connections. "Our platform transcends cultural boundaries and taps into the universal longing for real connections," emphasizes Alok. By focusing on authenticity rather than superficial profiles and pranks, blunder.one empowers individuals to be their true selves and find companionship on their own terms.

The name blunder.one carries a profound backstory rooted in the fear of making mistakes. It signifies a paradigm shift from fearing errors to embracing them as catalysts for personal growth and connection. Blunders become stepping stones to self-discovery, authentic expression, and the establishment of deep connections.

Motivated by their own disillusionment with the monotonous left swipe, right swipe culture, and the societal pressures of arranged marriages, Alok and Nishant embarked on a mission to create something different. Their vision extends far beyond surface-level judgments and societal expectations. "We're done with superficiality. We want someone who truly sees us--our quirks, our dreams, and our authentic selves," says Nishant. Inspired by the iconic line "I see you" from the movie Avatar, blunder.one aims to create a space where individuals can be seen and understood on a profound level. In a fast-paced world that has left us feeling disconnected from ourselves and others, blunder.one seeks to bridge that gap and connect individuals who can fill the void in each other's lives.

Join Alok Gotam, Nishant Singh, and the blunder.one community on a transformative journey of genuine connections. Together, let's redefine the meaning of companionship in a world where authenticity is paramount. (Disclaimer: The above press release has been provided by PNN. ANI will not be responsible in any way for the content of the same)

(This story has not been edited by Devdiscourse staff and is auto-generated from a syndicated feed.)

See more here:

Olbrain Founders launch blunder.one: Redefining Human Connections in the Post-AGI World - Devdiscourse


Beware the EU’s AI Regulations – theTrumpet.com

Developments in artificial intelligence have brought us to the verge of a technological revolution, for better or worse. Some argue that uncontrolled AI could lead to the extinction of humanity. Others believe excessive regulation could stifle progress. Nonetheless, companies and nations are racing to capitalize on the developments. The European Union is drafting a law that may decide the rules of this race, and perhaps even predetermine its winner.

In his international bestseller Life 3.0, Max Tegmark, MIT physicist and founder of the Future of Life Institute, suggests that machines exhibit artificial intelligence if they utilize extensive data and independently calculate the most effective means to accomplish a specific objective. The wider the scope of goals a machine can attain, the more general or human-like its intelligence becomes, hence the term "artificial general intelligence."

The EU defines AI systems as software that can, "for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with."

As AI applications grow broader, AI regulations promise to ensure that development takes place in a controlled way.

In 2020, the Catholic Church called for AI regulations and ethical standards. Three years later, the EU AI Act has been hailed as the world's first proposal for a comprehensive AI regulation. The regulation is designed to promote human-centric and ethical development and to ensure that AI systems are "overseen by people, are safe, transparent, traceable, non-discriminatory and environmentally friendly."

The Future of Life Institute noted on its European site: "Like the EU's General Data Protection Regulation (GDPR) in 2018, the EU AI Act could become a global standard." On Wednesday, June 14, the European Parliament passed the draft law.

Like GDPR, the EU AI Act demands compliance from other countries and threatens fines for non-compliance. In May, the EU reported the largest GDPR fine ever, amounting to €1.2 billion (US$1.3 billion), against Meta, Facebook's parent company. In addition to paying the fine, Meta was ordered to suspend the transfer of user data from the EU to the U.S. (For more information on GDPR, read "Germany Is Taking Control of the Internet.") This law has also affected AI applications. For example, Italy temporarily banned ChatGPT for data violations.

"European Union lawmakers on Wednesday [June 14] took a key step toward setting unprecedented restrictions on how companies use artificial intelligence, putting Brussels on a collision course with American tech giants funneling billions of dollars into the technology," wrote the Washington Post. The threat posed by the legislation is so grave that OpenAI, the maker of ChatGPT, said it may be forced to pull out of Europe, depending on what is included in the final text.

According to the EU law, AI systems will be regulated according to their assessed high or low risk. Those with high risk are "systems that could influence voters in elections or harm people's health," the Washington Post wrote. Some of these laws address serious issues; others could lead to overregulation and even ban any AI system that the government considers a threat to democracy, or to its grip on power.

Then there are regulations that promote leftist policies. To be non-discriminatory, an AI system would have to prioritize diversity. To be environmentally friendly, an AI system would have to prioritize reducing CO2 emissions over profit. The countless regulations create the opportunity for countless fines, and for regulators to control the market. The regulations could even be used to gain a competitive advantage.

Take the 2015 Paris Agreement as an example. The agreement put strict regulations on industries; however, it gave China a free pass to ignore those regulations until 2030 and, therefore, an unfair advantage over U.S. competitors (read The Deadly Climate Change Deception). Even those subject to the same regulations can use them in an unfair way.

In 2017, the U.S. found German carmakers Volkswagen, Daimler AG, BMW, Audi and Porsche guilty of pursuing a coordinated strategy of misrepresenting emission results to make diesel cars more competitive at home and abroad. The U.S. government fined them heavily for this obvious infraction; the German government was lenient.

While the EU AI Act doesn't apply to AI systems developed or used exclusively for military purposes, the European Parliament passed a resolution in 2018 calling for an international ban on "killer robots," or lethal autonomous weapons systems that are able to kill without human involvement.

In 2021, members of the European Parliament adopted Guidelines for Military and Non-Military Use of Artificial Intelligence, which called for AI to be subject to human control. The text calls on the EU to take a leading role in creating and promoting a global framework governing the military use of AI, alongside the [United Nations] and the international community.

Killer drones that operate without human control would give a nation a massive advantage in the next war. The Brookings Institution said that such regulations would only make sense if other nations signed on, as with the international Non-Proliferation Treaty. The danger of such treaties, however, is that some parties may not follow the regulation, and you wouldn't even know it.

Drawing on the insights of British computer scientist Stuart Russell, Max Tegmark describes bumblebee-sized drones capable of killing by strategically bypassing the skull and targeting the brain through the eye. The technology and material are easy to acquire. According to Tegmark, an AI application could also easily be programmed to kill only people with a certain skin color or ethnicity. Would rogue nations, dictators and terrorist groups follow the ethical rules of war if some treaty would regulate it?

Imagine if the very nation that proposed the regulation ended up breaking it. It would certainly take a most deceitful nation to come up with such a plan, but that's exactly what the Bible warns against.

Nahum 3:1 warns of a nation that is full of lies and robbery, or deceit and murder, as it could read. A nation described as such should not be trusted. Ezekiel 23 warns America and Britain (the modern descendants of ancient Israel) against a cunningly devised betrayal from one of its lovers. Trumpet editor in chief Gerald Flurry notes in Nahum: An End-Time Prophecy for Germany that these prophecies are about the very nation that currently leads the European Union: Germany.

Germany's behavior in two world wars could be described as full of deceit and murder. But the Bible reveals that this chapter of mankind's history is not yet closed. God wants Germany to use its wonderful qualities for good. However, due to the sins of our world, the Bible warns that God will allow unspeakable evils to engulf our world one more time. The book of Nahum forecasts that the German war machine will once again rise, before its war-making attitude will be forever destroyed.

There is wonderful news beyond these horrific scenarios. But we can only understand this great hope of tomorrow if we face reality today.

Read this article:

Beware the EU's AI Regulations - theTrumpet.com

Read More..

Generative AI Will Have Profound Impact Across Sectors – Rigzone News

Generative AI will have a profound impact across industries.

That's what Amazon Web Services (AWS) believes, according to Hussein Shel, an Energy Enterprise Technologist for the company, who said Amazon has invested heavily in the development and deployment of artificial intelligence and machine learning for more than two decades for both customer-facing services and internal operations.

We are now going to see the next wave of widespread adoption of machine learning, with the opportunity for every customer experience and application to be reinvented with generative AI, including the energy industry, Shel told Rigzone.

AWS will help drive this next wave by making it easy, practical, and cost-effective for customers to use generative AI in their business across all three layers of the technology stack, including infrastructure, machine learning tools, and purpose-built AI services, he added.

Looking at some of the applications and benefits of generative AI in the energy industry, Shel outlined that AWS sees the technology playing a pivotal role in increasing operational efficiencies, reducing health and safety exposure, enhancing customer experience, minimizing the emissions associated with energy production, and accelerating the energy transition.

For example, generative AI could play a pivotal role in addressing operational site safety, Shel said.

Energy operations often occur in remote, and sometimes hazardous and risky environments. The industry has long sought solutions that help to reduce trips to the field, which directly correlates to reduced worker health and safety exposure, he added.

Generative AI can help the industry make significant strides towards this goal. Images from cameras stationed at field locations can be sent to a generative AI application that could scan for potential safety risks, such as faulty valves resulting in gas leaks, he continued.

Shel said the application could generate recommendations for personal protective equipment and tools and equipment for remedial work, highlighting that this would help to eliminate an initial trip to the field to identify issues, minimize operational downtime, and also reduce health and safety exposure.

Another example is reservoir modeling, Shel noted.

Generative AI models can be used for reservoir modeling by generating synthetic reservoir models that can simulate reservoir behavior, he added.

GANs are a popular generative AI technique used to generate synthetic reservoir models. The generator network of the GAN is trained to produce synthetic reservoir models that are similar to real-world reservoirs, while the discriminator network is trained to distinguish between real and synthetic reservoir models, he went on to state.

Once the generative model is trained, it can be used to generate a large number of synthetic reservoir models that can be used for reservoir simulation and optimization, reducing uncertainty and improving hydrocarbon production forecasting, Shel stated.
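The generator-versus-discriminator setup Shel describes can be sketched with a deliberately tiny, invented example: a one-dimensional "reservoir property" distribution, a linear generator, and a logistic-regression discriminator trained against each other. This is a toy illustration of the adversarial objective, not a production reservoir workflow:

```python
import numpy as np

rng = np.random.default_rng(0)

# Real samples: a 1-D stand-in for a reservoir property (e.g. porosity).
def real_batch(n):
    return rng.normal(loc=0.25, scale=0.05, size=(n, 1))

# Generator G(z) = a*z + b: a linear toy stand-in for a deep generator.
a, b = 1.0, 0.0

def generate(z, a, b):
    return a * z + b

# Discriminator D(x) = sigmoid(w*x + c): logistic regression standing in
# for a deep discriminator network.
w, c = 0.1, 0.0

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

lr = 0.05
for step in range(2000):
    z = rng.normal(size=(64, 1))
    x_fake = generate(z, a, b)
    x_real = real_batch(64)

    # Discriminator ascent step: push D(real) up and D(fake) down,
    # i.e. maximize  mean log D(real) + mean log(1 - D(fake)).
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    w += lr * (np.mean((1 - d_real) * x_real) - np.mean(d_fake * x_fake))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator ascent step: maximize mean log D(fake) w.r.t. (a, b),
    # pulling the synthetic distribution toward the real one.
    d_fake = sigmoid(w * generate(z, a, b) + c)
    a += lr * np.mean((1 - d_fake) * w * z)
    b += lr * np.mean((1 - d_fake) * w)
```

After training, the generator's offset `b` has been pulled toward the real data's mean, so sampling `generate(z, a, b)` yields synthetic values resembling the real distribution; a real reservoir GAN applies the same adversarial loop to full 3-D property grids.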

These reservoir models can also be used for other energy applications where subsurface understanding is critical, such as geothermal and carbon capture and storage, Shel said.

Highlighting a third example, Shel pointed out a generative AI based digital assistant.

Data access is a continuous challenge the energy industry is looking to overcome, especially considering much of its data is decades old and sits in various systems and formats, he said.

Oil and gas companies, for example, have decades of documents created throughout the subsurface workflow in different formats, i.e., PDFs, presentations, reports, memos, well logs, word documents, and finding useful information takes a considerable amount of time, he added.

According to one of the top five operators, engineers spend 60 percent of their time searching for information. Ingesting all of those documents on a generative AI based solution augmented by an index can dramatically improve data access, which can lead to making better decisions faster, Shel continued.
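The retrieval-then-generate pattern Shel outlines can be sketched with a toy keyword index; the file names and contents below are invented for illustration, and the keyword overlap score stands in for the embedding-based retrieval a production assistant would use:

```python
from collections import Counter

# Toy corpus standing in for decades of subsurface documents.
docs = {
    "well_log_A12.txt": "gamma ray and porosity logs for well A-12, upper sand interval",
    "memo_2001_fault.txt": "memo on sealing fault interpretation near block 7",
    "report_ccs_pilot.txt": "carbon capture and storage pilot report, injection rates and plume model",
}

def tokenize(text):
    return text.lower().replace(",", " ").split()

# Inverted index: term -> set of documents containing it.
index = {}
for name, text in docs.items():
    for term in set(tokenize(text)):
        index.setdefault(term, set()).add(name)

def search(query, k=2):
    """Rank documents by query-term overlap with the index."""
    scores = Counter()
    for term in tokenize(query):
        for name in index.get(term, ()):
            scores[name] += 1
    return [name for name, _ in scores.most_common(k)]

# The passages retrieved here would then be placed into the prompt of a
# generative model, grounding its answer in the company's own documents.
hits = search("porosity logs for well A-12")
```

The engineer's question goes to the index first, and only the top-ranked passages reach the generative model, which is what turns hours of manual document searching into a single query.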

When asked if he thought all oil and gas companies will use generative AI in some way in the future, Shel said he did, but added that it's important to stress that it's still early days when it comes to defining the potential impact of generative AI on the energy industry.

At AWS, our goal is to democratize the use of generative AI, Shel told Rigzone.

To do this, we're providing our customers and partners with the flexibility to choose the way they want to build with generative AI, such as building their own foundation models with purpose-built machine learning infrastructure; leveraging pre-trained foundation models as base models to build their applications; or using services with built-in generative AI without requiring any specific expertise in foundation models, he added.

We're also providing cost-efficient infrastructure and the correct security controls to help simplify deployment, he continued.

The AWS representative outlined that AI applied through machine learning will be one of the most transformational technologies of our generation, tackling some of humanity's most challenging problems, augmenting human performance, and maximizing productivity.

As such, responsible use of these technologies is key to fostering continued innovation, Shel outlined.

AWS took part in the Society of Petroleum Engineers (SPE) International Gulf Coast Section's recent Data Science Convention event in Houston, Texas, which was attended by Rigzone's President. The event, which is described as the annual flagship event of the SPE-GCS Data Analytics Study Group, hosted representatives from the energy and technology sectors.

Last month, in a statement sent to Rigzone, GlobalData noted that machine learning has the potential to transform the oil and gas industry.

Machine learning is a rapidly growing field in the oil and gas industry, GlobalData said in the statement.

Overall, machine learning has the potential to improve efficiency, increase production, and reduce costs in the oil and gas industry, the company added.

In a report on machine learning in oil and gas published back in May, GlobalData highlighted several key players, including BP, ExxonMobil, Gazprom, Petronas, Rosneft, Saudi Aramco, Shell, and TotalEnergies.

Speaking to Rigzone earlier this month, Andy Wang, the Founder and Chief Executive Officer of data solutions company Prescient, said data science is the future of oil and gas.

Wang highlighted that data sciences includes many data tools, including machine learning, which he noted will be an important part of the future of the sector. When asked if he thought more and more oil companies would adopt data science, and machine learning, Wang responded positively on both counts.

Back in November 2022, OpenAI, which describes itself as an AI research and deployment company whose mission is to ensure that artificial general intelligence benefits all of humanity, introduced ChatGPT. In a statement posted on its website on November 30 last year, OpenAI said ChatGPT is a sibling model to InstructGPT, which is trained to follow an instruction in a prompt and provide a detailed response.

In April this year, Rigzone looked at how ChatGPT will affect oil and gas jobs. To view that article, click here.

To contact the author, email andreas.exarheas@rigzone.com

Read more:

Generative AI Will Have Profound Impact Across Sectors - Rigzone News


Satya Nadella's Oprah Moment: Microsoft CEO says he wants everyone to have an AI assistant – Firstpost

Like most companies that are referred to collectively as Big Tech, Microsoft is betting big on AI, hoping that people will adopt the up-and-coming tech in a variety of ways.

Enthused with the adoption of AI so far, especially ChatGPT, Microsoft CEO Satya Nadella made a bold and sweeping statement that he wants everyone to have their own AI assistant.

ChatGPT, the new companion

In November 2022, the introduction of ChatGPT created a major buzz in the tech industry, as well as the world. Shortly after its launch, conversations about this popular AI chatbot became widespread, as people discovered innovative ways to utilize its capabilities. Whether seeking help with crafting romantic poetry or guidance on financial matters and entrepreneurial ventures, many individuals have benefited from the assistance of ChatGPT.

As a result, Nadella envisions a future where AI plays a role in assisting every person on the planet, reflecting his ambitious aspirations.

Nadella's take on the future of AI

During an interview with Wired, Satya Nadella shared his vision of a future where every individual on Earth, all 8 billion people, would have access to an AI tutor, an AI doctor, a programmer, and potentially even a consultant. He expressed his desire for AI to be widely available as assistants and used to help people with their daily lives.

When asked about his thoughts on humans reaching the AGI superintelligence milestone, Nadella responded by emphasizing his focus on the positive aspects of AI rather than worrying about AGI. He expressed a personal connection to the issue, mentioning that the industrial revolution had a delayed impact on the regions where he grew up.

Nadella's aspiration is to find something even more transformative than the industrial revolution, something that can bring about widespread advancements and prosperity for all individuals worldwide. He explained that he is not concerned about the arrival or rapid development of AGI, as he believes it would lead to abundance for all, creating a truly remarkable world to live in.

For the unversed, AGI, or Artificial General Intelligence, is a concept that refers to machines possessing the ability to comprehend the world to a similar extent as humans and autonomously make decisions. OpenAI CEO Sam Altman has consistently emphasized the need for caution in AI usage, highlighting the rapid advancement of the technology and the necessity for regulations.

On Microsoft partnering up with OpenAI

Microsoft, which had previously invested in OpenAI, the company behind ChatGPT, announced in January of this year that the two companies are further strengthening their partnership through a multiyear, multibillion-dollar investment. Microsoft has also dedicated a supercomputer to OpenAI, which allows them to carry out complex tasks such as training their LLM.

Nadella said, We formed our partnership with OpenAI around a shared ambition to responsibly advance cutting-edge AI research and democratize AI as a new technology platform. In this next phase of our partnership, developers and organizations across industries will have access to the best AI infrastructure, models, and toolchain with Azure to build and run their applications.


Here is the original post:

Satya Nadella's Oprah Moment: Microsoft CEO says he wants everyone to have an AI assistant - Firstpost
