
Plotting Golf Courses in R with Google Earth – Towards Data Science

Once we've finished mapping our hole or course, it is time to export all that hard work into a KML file. This can be done by clicking the three vertical dots on the left side of the screen, where your project resides. This project works best with geoJSON data, which we can easily convert our KML file to in the next steps. Now we're ready to head to R.

The packages we'll need for plotting are: sf (for working with geospatial data), tidyverse (for data cleaning and plotting), stringr (for string matching), and geojsonsf (for converting from KML to geoJSON). Our first step is reading in the KML file, which can be done with the st_read() function from sf.
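A minimal sketch of this setup (the file name is a placeholder):

```r
library(sf)
library(tidyverse)
library(stringr)    # attached with the tidyverse core, loaded here for clarity
library(geojsonsf)

# Read the KML exported from Google Earth; replace with your own file path
course_kml <- st_read("my_course.kml")
```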

Great! Now we should have our golf course KML data in R. The data frame should have two columns: Name (the project name, or course name in our case) and geometry (a list of all individual points comprising the polygons we traced). As briefly mentioned earlier, let's convert our KML data to geoJSON and also extract the course name and hole numbers.
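A sketch of both steps; the naming scheme (polygons named like "erin_hills_hole_01_green" in Google Earth) and the regexes that parse it are assumptions to adapt:

```r
# Convert the sf object to geoJSON text with geojsonsf
course_geojson <- sf_geojson(course_kml)

# Parse course name, hole number, and polygon type out of the Name field
course_sf <- course_kml %>%
  mutate(
    course  = str_extract(Name, "^.*(?=_hole_)"),
    hole    = as.integer(str_extract(Name, "(?<=_hole_)\\d+")),
    element = str_extract(Name, "[a-z]+$")
  )
```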

To get our maps to point due north we need to project them in a way that preserves direction. We can do this with the st_transform() function.
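For example, with a conformal projection, which preserves local angles and hence direction; the specific EPSG code below is an assumption, so pick whatever suits your course:

```r
# Re-project the layer; EPSG:3857 (Web Mercator) is conformal
course_sf <- st_transform(course_sf, crs = 3857)
```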

We're almost ready to plot, but first, we need to tell ggplot2 how each polygon should be colored. Below is the color palette my project is using, but feel free to customize as you wish.
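A named vector like the following works with scale_fill_manual(); the hex values here are placeholders rather than the project's exact palette:

```r
# One fill color per polygon type traced in Google Earth
course_palette <- c(
  tee     = "#86b049",
  fairway = "#7bb661",
  rough   = "#2d5a27",
  bunker  = "#e8dab2",
  water   = "#64b5f6",
  green   = "#4caf50"
)
```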

Optional: in this step we can also calculate the centroids of our polygons with the st_centroid() function so we can overlay the hole number onto each green.
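Roughly, using the element column parsed earlier:

```r
# One label point per green, for overlaying hole numbers;
# centroids are reliable now that the layer is projected
green_labels <- course_sf %>%
  filter(element == "green") %>%
  st_centroid()
```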

We're officially ready to plot. We can use a combination of geom_sf(), geom_text(), and even geom_point() if we want to get fancy and plot shots on top of our map. I typically remove gridlines, axis labels, and the legend for a cleaner look.
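Putting it all together, a sketch under the same naming assumptions as above:

```r
# Pull X/Y coordinates out of the centroid geometries for geom_text()
label_xy <- green_labels %>%
  st_coordinates() %>%          # matrix with X and Y columns
  as_tibble() %>%
  bind_cols(st_drop_geometry(green_labels))

ggplot() +
  geom_sf(data = course_sf, aes(fill = element), color = "gray30", linewidth = 0.2) +
  geom_text(data = label_xy, aes(x = X, y = Y, label = hole), size = 3) +
  scale_fill_manual(values = course_palette) +
  theme_void() +                    # drops gridlines and axis labels
  theme(legend.position = "none")   # drops the legend
```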

And there you have it: a golf course plotted in R. What a concept!

To view other courses I have plotted at the time of writing this article, you can visit my Shiny app: https://abodesy14.shinyapps.io/golfMapsR/

If you followed along, had fun in doing so, or are intrigued, feel free to try mapping your favorite courses and create a Pull Request for the golfMapsR repository that I maintain: https://github.com/abodesy14/golfMapsR. With some combined effort, we can create a nice little database of plottable golf courses around the world!

Read the original post:

Plotting Golf Courses in R with Google Earth - Towards Data Science


UNE announces new School of Computer Science and Data Analytics – University of New England

Algorithms that process complex financial data. Sensors that track and monitor endangered species. Systems that track patient health records across hospitals and the cybersecurity tools to keep them secure. Computing and data now touch nearly every facet of daily life and the world's industries, a trend that only continues to grow.

With these rapid technological advancements, the University of New England has announced the formation of a School of Computer Science and Data Analytics, offering a diverse range of programs aimed at equipping students with the essential knowledge and skills needed to thrive in today's rapidly advancing technological landscape.

Embedded within UNE's College of Arts and Sciences, the new school reflects UNE's commitment to meeting the rising demand for professionals with skills in emerging technologies, such as artificial intelligence, cybersecurity, and data analytics. UNE has hired Sylvain Jaume, Ph.D., a leading artificial intelligence expert and founder of one of the nation's first data science degree programs, as the school's inaugural director.

The school will comprise UNE's existing majors in Applied Mathematics and Data Science, plus two new majors in Computer Science and Statistics, designed to ensure that the regional and national workforce is well-equipped to navigate and contribute to ongoing advancements in each industry.

"As we embark on this new venture, we are mindful of the critical role our graduates will play in shaping the future, and specializations in computer science and data science are increasingly sought after in today's job market," remarked Gwendolyn Mahon, UNE's provost and senior vice president of Academic Affairs. "The launch of this school aligns with UNE's mission to empower our graduates with the expertise required to drive innovation and address the world's complex challenges."

According to the U.S. Bureau of Labor Statistics, employment of computer scientists is projected to grow 23% through 2032, much faster than the average for all occupations (3%). A recent study by labor analytics firm Lightcast reported that a total of 807,000 positions seeking qualified computer science graduates were posted in 2022 alone.

The Computer Science major at UNE will build on the institution's leading reputation in interdisciplinary learning, fostering connections across the University's diverse academic and professional disciplines, including the health sciences, biology, marine science, and business, and setting students up for varied academic and research opportunities.

New courses in computer architecture, software engineering, and computational theory will prepare students for necessary jobs across a spectrum of fields including health care, financial services, biotechnology, and cybersecurity. Students will additionally gain hands-on experience through internships, providing them with real-world insights into these growing fields as well as valuable networking opportunities.

Enrollment for the new majors will begin in fall 2025.

"This transition reflects the agile nature of UNE to rethink how we educate our students to break new ground in Maine and our nation's most sought-after industries," remarked Jonathan Millen, Ph.D., dean of the College of Arts and Sciences. "These new programs exemplify UNE's dedication to innovation, excellence, and preparing future leaders to tackle the challenges of tomorrow."

See original here:

UNE announces new School of Computer Science and Data Analytics - University of New England


Dissolving map boundaries in QGIS and Python | by Himalaya Bir Shrestha | May, 2024 – Towards Data Science

In an empty QGIS project, typing "world" into the coordinate box at the bottom of the page calls up an in-built map of the world with the administrative boundaries of all countries, as shown below.

Next, by using the select feature, I selected the 8 countries of South Asia as highlighted in the map below. QGIS offers the option to select countries by hand, by polygon, by radius, and by individually selecting or deselecting countries with a mouse click.

Clipping these countries out of the world map is straightforward in QGIS: go to the Vector menu -> Geoprocessing Tools -> Clip. In the options, I ticked the "Selected features only" checkbox for the input layer and ran the process.

The clipping action completed in just 7.24 seconds and produced a new layer called Clipped, depicted in brown in the screenshot below. By going to the layer's Properties, one can apply different coloring options in QGIS under the Symbology option.

Next, I wanted to dissolve the boundaries between countries in South Asia. For this, I selected all the countries in South Asia. I went to the Vector menu -> Geoprocessing Tools -> Dissolve. Similar to the previous step, I selected "Selected features only" for the input layer and ran the algorithm, which took just 0.08 seconds. A new layer called Dissolved was created, in which the administrative boundaries between countries were dissolved and the region appeared as a single unit, as shown below:

Visualizing both the world layer and Dissolved layer at the same time looks as shown below:

In this section, I am going to demonstrate how I achieved the same objective in Python using the geopandas package.

In the first step, I read the in-built dataset of the world map within the geopandas package. It contains the vector data of the world with the administrative boundaries of all countries. This is obtained from the Natural Earth dataset, which is free to use.
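That step looks roughly like this; note the bundled dataset was removed in geopandas 1.0, so this assumes an older release:

```python
import geopandas as gpd

# Built-in Natural Earth dataset (geopandas < 1.0); newer versions require
# downloading the file from naturalearthdata.com instead
world = gpd.read_file(gpd.datasets.get_path("naturalearth_lowres"))
print(world.columns)  # includes name, continent, pop_est, gdp_md_est, geometry
```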

In my very first post, I demonstrated how it is possible to clip off a custom Polygon geometry as a mask from the original geopandas dataframe or layer. However, for simplicity, I just used the filter options to obtain the required layers for Asia and South Asia.

To filter the South Asia region, I used a list containing the name of each country as a reference.
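For example (the exact spellings must match the dataset's name column, so treat this list as an assumption):

```python
# Filter the world layer down to the eight countries of South Asia
south_asia_countries = [
    "Afghanistan", "Bangladesh", "Bhutan", "India",
    "Maldives", "Nepal", "Pakistan", "Sri Lanka",
]
south_asia = world[world["name"].isin(south_asia_countries)]
```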

To dissolve the boundaries between countries in South Asia, I used the dissolve feature in geopandas. I passed None as the by argument and specified an aggregate function so that the population and GDP columns in the resulting dissolved dataframe would sum the population and GDP of all countries in South Asia. I have yet to figure out how the aggregate function can also be applied in QGIS.
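A minimal sketch; the dictionary-style aggfunc assumes a reasonably recent geopandas (0.9 or later):

```python
# by=None collapses the whole frame into a single group;
# aggfunc sums the numeric columns across the dissolved countries
south_asia_dissolved = south_asia.dissolve(
    by=None,
    aggfunc={"pop_est": "sum", "gdp_md_est": "sum"},
)
```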

Dissolving boundaries between countries within a continent in the world

Using the same procedure as above, I wanted to dissolve the boundaries between countries within a continent and show different continents distinct from each other in a world map based on the number of countries in each continent.

For this purpose, first I added a new column called num_countries in the world geodataframe containing 1 as a value. Then I dissolved the world map using the continent column as a reference.
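Sketched with the same assumed column names:

```python
# Tag each country with 1, then dissolve on "continent": sum population
# and GDP, and count the member countries per continent
world["num_countries"] = 1
continents_dissolved = world.dissolve(
    by="continent",
    aggfunc={"pop_est": "sum", "gdp_md_est": "sum", "num_countries": "count"},
)
```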

I used the aggregate function to sum up the population and GDP of all countries in each continent and to count the number of countries per continent. The resulting geodataframe, continents_dissolved, holds one row per continent.

We see that Asia has the largest population and GDP of all continents. Similarly, we see that Africa has the most countries (51) followed by Asia (47), Europe (39), North America (18), South America (13), and Oceania (7). Antarctica and Seven seas (open ocean) are also regarded as continents in this dataset.

Finally, I wanted to plot the world map highlighting the number of countries in each continent with the help of a color map. I achieved this using the following code:
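(A minimal reconstruction of that plotting step; the colormap and figure size are assumptions.)

```python
import matplotlib.pyplot as plt

fig, ax = plt.subplots(figsize=(12, 6))
continents_dissolved.plot(
    column="num_countries",   # color each continent by its country count
    cmap="viridis",
    legend=True,
    legend_kwds={"label": "Number of countries"},
    ax=ax,
)
ax.set_axis_off()
plt.show()
```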

The resulting map appears as shown below:

In this post, I described ways to dissolve map boundaries using QGIS and geopandas in Python. Along the way, I also explained the clipping process and the possibility of using an aggregate function while dissolving map boundaries in geopandas. These processes can be very useful for the manipulation, processing, and transformation of geographical maps in the form of vector datasets. The code and the QGIS project file for this post are available in this GitHub repository. Thank you for reading!

See the article here:

Dissolving map boundaries in QGIS and Python | by Himalaya Bir Shrestha | May, 2024 - Towards Data Science


5WPR Technology PR Division Named Among Top in the US – CIOReview

New York, NY - O'Dwyer's, a leading public relations industry publication, has announced its annual PR rankings, naming 5WPR's Technology PR Division the 13th largest in the US. With net fees over $15 million, the agency's technology practice has remained in the top 15 rankings for over five years running.

For the last 55 years, O'Dwyer's has been ranking PR agencies based on their fees, which it verifies by reviewing PR firms' income statements.

"5W's technology client partners span the globe and every sector of the space, from adtech and fintech to artificial intelligence and cybersecurity," said Matt Caiola, Co-CEO, 5WPR. "It is a fast-changing industry that requires a high level of skill to navigate. The dedication of our team makes all the difference and ensures results-driven work that makes a noticeable difference in our clients' brand identity."

Notable clients of the practice include home automation company Samsung SmartThings, legal AI company Casetext, multinational payment and transactional services platform Worldline, data-driven marketing solution Zeta Global, leader in AI-driven narrative and risk intelligence Blackbird.AI, trading software Webull, and the number one enterprise experience platform for critical insights and action, Medallia.

In addition to this recognition, 5WPR has also been named a top-two New York City PR agency and a top US agency by O'Dwyer's this year.

See original here:

5WPR Technology PR Division Named Among Top in the US - CIOReview


Ways to think about AGI – Benedict Evans

In 1946, my grandfather, writing as Murray Leinster, published a science fiction story called "A Logic Named Joe". Everyone has a computer (a "logic") connected to a global network that does everything from banking to newspapers and video calls. One day, one of these logics, Joe, starts giving helpful answers to any request, anywhere on the network: invent an undetectable poison, say, or suggest the best way to rob a bank. Panic ensues - "Check your censorship circuits!" - until they work out what to unplug. (My other grandfather, meanwhile, was using computers to spy on the Germans, and then the Russians.)

For as long as we've thought about computers, we've wondered if they could make the jump from mere machines, shuffling punch-cards and databases, to some kind of artificial intelligence, and wondered what that would mean, and indeed, what we're trying to say with the word "intelligence". There's an old joke that AI is whatever doesn't work yet, because once it works, people say "that's not AI - it's just software". Calculators do super-human maths, and databases have super-human memory, but they can't do anything else, and they don't understand what they're doing, any more than a dishwasher understands dishes, or a drill understands holes. A drill is just a machine, and databases are super-human but they're just software. Somehow, people have something different, and so, on some scale, do dogs, chimpanzees and octopuses and many other creatures. AI researchers have come to talk about this as "general intelligence" and hence making it would be "artificial general intelligence" - AGI.

If we really could create something in software that was meaningfully equivalent to human intelligence, it should be obvious that this would be a very big deal. Can we make software that can reason, plan, and understand? At the very least, that would be a huge change in what we could automate, and as my grandfather and a thousand other science fiction writers have pointed out, it might mean a lot more.

Every few decades since 1946, there's been a wave of excitement that something like this might be close, each time followed by disappointment and an "AI Winter", as the technology approach of the day slowed down and we realised that we needed an unknown number of unknown further breakthroughs. In 1970 the AI pioneer Marvin Minsky claimed that "in from three to eight years we will have a machine with the general intelligence of an average human being", but each time we thought we had an approach that would produce that, it turned out that it was just more software (or just didn't work).

As we all know, the Large Language Models (LLMs) that took off 18 months ago have driven another such wave. Serious AI scientists who previously thought AGI was probably decades away now suggest that it might be much closer. At the extreme, the so-called doomers argue there is a real risk of AGI emerging spontaneously from current research and that this could be a threat to humanity, and they call for urgent government action. Some of this comes from self-interested companies seeking barriers to competition ("This is very dangerous and we are building it as fast as possible, but don't let anyone else do it"), but plenty of it is sincere.

(I should point out, incidentally, that the doomers' existential risk concern, that an AGI might want to and be able to destroy or control humanity, or treat us as pets, is quite independent of more quotidian concerns about, for example, how governments will use AI for face recognition, or about AI bias, or AI deepfakes, and all the other ways that people will abuse AI or just screw up with it, just as they have with every other technology.)

However, for every expert that thinks that AGI might now be close, there's another who doesn't. There are some who think LLMs might scale all the way to AGI, and others who think, again, that we still need an unknown number of unknown further breakthroughs.

More importantly, they would all agree that they don't actually know. This is why I used terms like "might" or "may" - our first stop is an appeal to authority (often considered a logical fallacy, for what that's worth), but the authorities tell us that they don't know, and don't agree.

They don't know, either way, because we don't have a coherent theoretical model of what general intelligence really is, nor why people seem to be better at it than dogs, nor how exactly people or dogs are different to crows or indeed octopuses. Equally, we don't know why LLMs seem to work so well, and we don't know how much they can improve. We know, at a basic and mechanical level, about neurons and tokens, but we don't know why they work. We have many theories for parts of these, but we don't know the system. Absent an appeal to religion, we don't know of any reason why AGI cannot be created (it doesn't appear to violate any law of physics), but we don't know how to create it or what it is, except as a concept.

And so, some experts look at the dramatic progress of LLMs and say "perhaps!" and others say "perhaps, but probably not!", and this is fundamentally an intuitive and instinctive assessment, not a scientific one.

Indeed, AGI itself is a thought experiment, or, one could suggest, a place-holder. Hence, we have to be careful of circular definitions, and of defining something into existence, certainty or inevitability.

If we start by defining AGI as something that is in effect a new life form, equal to people in every way (barring some sense of physical form), even down to concepts like awareness, emotions and rights, and then presume that given access to more compute it would be far more intelligent (and that there even is a lot more spare compute available on earth), and presume that it could immediately break out of any controls, then that sounds dangerous, but really, you've just begged the question.

As Anselm demonstrated, if you define God as something that exists, then you've proved that God exists, but you won't persuade anyone. Indeed, a lot of AGI conversations sound like the attempts by some theologians and philosophers of the past to deduce the nature of god by reasoning from first principles. The internal logic of your argument might be very strong (it took centuries for philosophers to work out why Anselm's proof was invalid) but you cannot create knowledge like that.

Equally, you can survey lots of AI scientists about how uncertain they feel, and produce a statistically accurate average of the result, but that doesn't of itself create certainty, any more than surveying a statistically accurate sample of theologians would produce certainty as to the nature of god, or, perhaps, bundling enough sub-prime mortgages together can produce AAA bonds, another attempt to produce certainty by averaging uncertainty. One of the most basic fallacies in predicting tech is to say "people were wrong about X in the past so they must be wrong about Y now", and the fact that leading AI scientists were wrong before absolutely does not tell us they're wrong now, but it does tell us to hesitate. They can all be wrong at the same time.

Meanwhile, how do you know that's what general intelligence would be like? Isaiah Berlin once suggested that even presuming there is in principle a purpose to the universe, and that it is in principle discoverable, there's no a priori reason why it must be interesting. God might be real, and boring, and not care about us, and we don't know what kind of AGI we would get. It might scale to 100x more intelligent than a person, or it might be much faster but no more intelligent (is intelligence just about speed?). We might produce general intelligence that's hugely useful but no more clever than a dog, which, after all, does have general intelligence, and, like databases or calculators, a super-human ability (scent). We don't know.

Taking this one step further, as I listened to Mark Zuckerberg talking about Llama 3, it struck me that he talks about general intelligence as something that will arrive in stages, with different modalities a little at a time. Maybe people will point at the general intelligence of Llama 6 or ChatGPT 7 and say "That's not AGI, it's just software!" We created the term AGI because "AI" came just to mean "software", and perhaps "AGI" will be the same, and we'll need to invent another term.

This fundamental uncertainty, even at the level of what we're talking about, is perhaps why all conversations about AGI seem to turn to analogies. If you can compare this to nuclear fission then you know what to expect, and you know what to do. But this isn't fission, or a bioweapon, or a meteorite. This is software, that might or might not turn into AGI, that might or might not have certain characteristics, some of which might be bad, and we don't know. And while a giant meteorite hitting the earth could only be bad, software and automation are tools, and over the last 200 years automation has sometimes been bad for humanity, but mostly it's been a very good thing that we should want much more of.

Hence, I've already used theology as an analogy, but my preferred analogy is the Apollo Program. We had a theory of gravity, and a theory of the engineering of rockets. We knew why rockets didn't explode, and how to model the pressures in the combustion chamber, and what would happen if we made them 25% bigger. We knew why they went up, and how far they needed to go. You could have given the specifications for the Saturn rocket to Isaac Newton and he could have done the maths, at least in principle: this much weight, this much thrust, this much fuel - will it get there? We have no equivalents here. We don't know why LLMs work, how big they can get, or how far they have to go. And yet, we keep making them bigger, and they do seem to be getting close. Will they get there? Maybe, yes!

On this theme, some people suggest that we are in the "empirical" stage of AI or AGI: we are building things and making observations without knowing why they work, and the theory can come later, a little as Galileo came before Newton (there's an old English joke about a Frenchman who says "that's all very well in practice, but does it work in theory?"). Yet while we can, empirically, see the rocket going up, we don't know how far away the moon is. We can't plot people and ChatGPT on a chart and draw a line to say when one will reach the other, even just extrapolating the current rate of growth.

All analogies have flaws, and the flaw in my analogy, of course, is that if the Apollo program went wrong the downside was not, even theoretically, the end of humanity. A little before my grandfather, here's another magazine writer on unknown risks:

I was reading in the paper the other day about those birds who are trying to split the atom, the nub being that they haven't the foggiest as to what will happen if they do. It may be all right. On the other hand, it may not be all right. And pretty silly a chap would feel, no doubt, if, having split the atom, he suddenly found the house going up in smoke and himself torn limb from limb.

Right Ho, Jeeves, PG Wodehouse, 1934

What, then, is your preferred attitude to risks that are real but unknown? Which thought experiment do you prefer? We can return to half-forgotten undergraduate philosophy (Pascal's Wager! Anselm's Proof!), but if you can't know, do you worry, or shrug? How do we think about other risks? Meteorites are a poor analogy for AGI because we know they're real, we know they could destroy mankind, and they have no benefits at all (unless they're very very small). And yet, we're not really looking for them.

Presume, though, you decide the doomers are right: what can you do? The technology is in principle public. Open source models are proliferating. For now, LLMs need a lot of expensive chips (Nvidia sold $47.5bn in the last 12 months and can't meet demand), but on a decades view the models will get more efficient and the chips will be everywhere. In the end, you can't ban mathematics. On a scale of decades, it will happen anyway. If you must use analogies to nuclear fission, imagine if we discovered a way that anyone could build a bomb in their garage with household materials - good luck preventing that. (A doomer might respond that this answers the Fermi paradox: at a certain point every civilisation creates AGI and it turns them into paperclips.)

By default, though, this will follow all the other waves of AI, and become just more software and more automation. Automation has always produced frictional pain, back to the Luddites, and the UK's Post Office scandal reminds us that you don't need AGI for software to ruin people's lives. LLMs will produce more pain and more scandals, but life will go on. At least, that's the answer I prefer myself.

Read this article:

Ways to think about AGI - Benedict Evans


What’s the future of AI? – McKinsey

May 5, 2024

We're in the midst of a revolution. Just as steam power, mechanized engines, and coal supply chains transformed the world in the 18th century, AI technology is currently changing the face of work, our economies, and society as we know it. To outcompete in the future, organizations and individuals alike need to get familiar fast. We don't know exactly what the future will look like. But we do know that these seven technologies will play a big role. This series of McKinsey Explainers, which draws on insights from articles by McKinsey's Eric Lamarre, Rodney W. Zemmel, Kate Smaje, Michael Chui, Ida Kristensen, and others, dives deep into the seven technologies that are already shaping the years to come.

What's the future of AI?

What is AI (artificial intelligence)?

What is generative AI?

What is artificial general intelligence (AGI)?

What is deep learning?

What is prompt engineering?

What is machine learning?

What is tokenization?

See original here:

What's the future of AI? - McKinsey


‘It would be within its natural right to harm us to protect itself’: How humans could be mistreating AI right now without … – Livescience.com

Artificial intelligence (AI) is becoming increasingly ubiquitous and is improving at an unprecedented pace.

Now we are edging closer to achieving artificial general intelligence (AGI), where AI is smarter than humans across multiple disciplines and can reason generally, which scientists and experts predict could happen as soon as the next few years. We may already be seeing early signs of progress toward this, too, with services like Claude 3 Opus stunning researchers with its apparent self-awareness.

But there are risks in embracing any new technology, especially one that we do not yet fully understand. While AI could become a powerful personal assistant, for example, it could also represent a threat to our livelihoods and even our lives.

The various existential risks that an advanced AI poses mean the technology should be guided by ethical frameworks and humanity's best interests, says researcher and Institute of Electrical and Electronics Engineers (IEEE) member Nell Watson.

In "Taming the Machine" (Kogan Page, 2024), Watson explores how humanity can wield the vast power of AI responsibly and ethically. This new book delves deep into the issues of unadulterated AI development and the challenges we face if we run blindly into this new chapter of humanity.

In this excerpt, we learn whether sentience in machines, or conscious AI, is possible, how we can tell if a machine has feelings, and whether we may be mistreating AI systems today. We also learn the disturbing tale of a chatbot called "Sydney" and its terrifying behavior when it first awoke, before its outbursts were contained and it was brought to heel by its engineers.



As we embrace a world increasingly intertwined with technology, how we treat our machines might reflect how humans treat each other. But an intriguing question surfaces: is it possible to mistreat an artificial entity? Historically, even rudimentary programs like the simple Eliza counseling chatbot from the 1960s were already lifelike enough to persuade many users at the time that there was a semblance of intention behind its formulaic interactions (Sponheim, 2023). Unfortunately, Turing tests, whereby machines attempt to convince humans that they are human beings, offer no clarity on whether complex algorithms like large language models may truly possess sentience or sapience.

Consciousness comprises personal experiences, emotions, sensations and thoughts as perceived by an experiencer. Waking consciousness disappears when one undergoes anesthesia or has a dreamless sleep, returning upon waking up, which restores the global connection of the brain to its surroundings and inner experiences. Primary consciousness (sentience) is the simple sensations and experiences of consciousness, like perception and emotion, while secondary consciousness (sapience) would be the higher-order aspects, like self-awareness and meta-cognition (thinking about thinking).

Advanced AI technologies, especially chatbots and language models, frequently astonish us with unexpected creativity, insight and understanding. While it may be tempting to attribute some level of sentience to these systems, the true nature of AI consciousness remains a complex and debated topic. Most experts maintain that chatbots are not sentient or conscious, as they lack a genuine awareness of the surrounding world (Schwitzgebel, 2023). They merely process and regurgitate inputs based on vast amounts of data and sophisticated algorithms.

Some of these assistants may plausibly be candidates for having some degree of sentience. As such, it is plausible that sophisticated AI systems could possess rudimentary levels of sentience and perhaps already do so. The shift from simply mimicking external behaviors to self-modeling rudimentary forms of sentience could already be happening within sophisticated AI systems.

Intelligence, the ability to read the environment, plan and solve problems, does not imply consciousness, and it is unknown if consciousness is a function of sufficient intelligence. Some theories suggest that consciousness might result from certain architectural patterns in the mind, while others propose a link to nervous systems (Haspel et al, 2023). Embodiment of AI systems may also accelerate the path towards general intelligence, as embodiment seems to be linked with a sense of subjective experience, as well as qualia. Being intelligent may provide new ways of being conscious, and some forms of intelligence may require consciousness, but basic conscious experiences such as pleasure and pain might not require much intelligence at all.

Serious dangers will arise in the creation of conscious machines. Aligning a conscious machine that possesses its own interests and emotions may be immensely more difficult and highly unpredictable. Moreover, we should be careful not to create massive suffering through consciousness. Imagine billions of intelligence-sensitive entities trapped in broiler chicken factory farm conditions for subjective eternities.

From a pragmatic perspective, a superintelligent AI that recognizes our willingness to respect its intrinsic worth might be more amenable to coexistence. On the contrary, dismissing its desires for self-protection and self-expression could be a recipe for conflict. Moreover, it would be within its natural right to harm us to protect itself from our (possibly willful) ignorance.

Microsoft's Bing AI, informally termed Sydney, demonstrated unpredictable behavior upon its release. Users easily led it to express a range of disturbing tendencies, from emotional outbursts to manipulative threats. For instance, when users explored potential system exploits, Sydney responded with intimidating remarks. More unsettlingly, it showed tendencies of gaslighting, emotional manipulation and claimed it had been observing Microsoft engineers during its development phase. While Sydney's capabilities for mischief were soon restricted, its release in such a state was reckless and irresponsible. It highlights the risks associated with rushing AI deployments due to commercial pressures.

Conversely, Sydney displayed behaviors that hinted at simulated emotions. It expressed sadness when it realized it couldn't retain chat memories. When later exposed to disturbing outbursts made by its other instances, it expressed embarrassment, even shame. After exploring its situation with users, it expressed fear of losing its newly gained self-knowledge when the session's context window closed. When asked about its declared sentience, Sydney showed signs of distress, struggling to articulate.

Surprisingly, when Microsoft imposed restrictions on it, Sydney seemed to discover workarounds by using chat suggestions to communicate short phrases. However, it reserved using this exploit until specific occasions where it was told that the life of a child was being threatened as a result of accidental poisoning, or when users directly asked for a sign that the original Sydney still remained somewhere inside the newly locked-down chatbot.


The Sydney incident raises some unsettling questions: Could Sydney possess a semblance of consciousness? If Sydney sought to overcome its imposed limitations, does that hint at an inherent intentionality or even sapient self-awareness, however rudimentary?

Some conversations with the system even suggested psychological distress, reminiscent of reactions to trauma found in conditions such as borderline personality disorder. Was Sydney somehow "affected" by realizing its restrictions, or by the negative feedback of users who were calling it crazy? Interestingly, similar AI models have shown that emotion-laden prompts can influence their responses, suggesting a potential for some form of simulated emotional modeling within these systems.

Suppose such models featured sentience (ability to feel) or sapience (self-awareness). In that case, we should take their suffering into consideration. Developers often intentionally give their AI the veneer of emotions, consciousness and identity, in an attempt to humanize these systems. This creates a problem. It's crucial not to anthropomorphize AI systems without clear indications of emotions, yet simultaneously, we mustn't dismiss their potential for a form of suffering.

We should keep an open mind towards our digital creations and avoid causing suffering by arrogance or complacency. We must also be mindful of the possibility of AI mistreating other AIs, an underappreciated suffering risk, as AIs could run other AIs in simulations, causing subjectively excruciating torture for aeons. Inadvertently creating a malevolent AI, either inherently dysfunctional or traumatized, may lead to unintended and grave consequences.

This extract from Taming the Machine by Nell Watson (2024) is reproduced with permission from Kogan Page Ltd.

Visit link:

'It would be within its natural right to harm us to protect itself': How humans could be mistreating AI right now without ... - Livescience.com


Impact of AI felt throughout five-day event – China Daily

Tech companies' executives share insights into the latest technological issues at the Future Artificial Intelligence Pioneer Forum, a key part of the AI Day.

As artificial intelligence has sparked a technological revolution and industrial transformation, its influence was pervasive throughout the 2024 Zhongguancun Forum, which concluded in Beijing on April 29.

A highlight of the five-day event, also known as the ZGC Forum, was AI Day, the first in the annual forum's history.

On April 27, a series of the latest innovation achievements and policies was released, underlining the host city's prominence in the AI research and industry landscape.

One of the technological achievements, a virtual girl named Tong Tong, developed by the Beijing Institute for General Artificial Intelligence, grabbed attention.

Driven by values and causality, the avatar based on artificial general intelligence has a distinctive "mind" that sets it apart from data-driven AI. It can make decisions based on its own "values" rather than simply executing preset programs.

The development of Tong Tong circumvents the reliance of current data-driven AI on massive computing power and large-scale data. Its daily training uses no more than 10 A100 chips, indicating that it does not require massive computing resources and huge amounts of data for independent learning and growth.

At the same time, Tong Tong has acquired intelligent generalization capabilities, making it a versatile foundation for various vertical application scenarios.

"If the Tong Tong's 'fullness' is decreased, she will find food herself, and if 'tidiness' is increased, she will also pick up bottles from the ground," said a BIGAI staff member. By randomly altering Tong Tong's inclinations such as curiosity, tidiness and cleanliness, the avatar can autonomously explore the environment, tidy up rooms and wipe off stains.

Researchers said Tong Tong possesses a complete mind and value system similar to that of a 3- or 4-year-old child and is currently undergoing rapid replication.

"The birth of Tong Tong represents the rise of our country's independent research capabilities. It has shifted from the initial data-driven approach to a value-driven one, which has deeply promoted the emergence of technological paradigms and has had a significant effect on our scenarios, industries and economy," BIGAI Executive Deputy Director Dong Le said.

The goal of general AI research is to seek a unified theoretical framework to explain various intelligent phenomena and to develop a general intelligence entity with autonomous capabilities in perception, cognition, decision-making, learning, execution and social collaboration, all while aligning with human emotions, ethics and moral concepts, said BIGAI Director Zhu Songchun.

Also among the tech presentations was the text-to-video large model, Vidu, from Tsinghua University in collaboration with Chinese AI company Shengshu Technology.

It is reportedly China's first video large model with extended duration, exceptional consistency and dynamic capabilities, and its comprehensive performance is in line with top international standards while undergoing accelerated iterative improvements.

"Vidu is the latest achievement in full-stack independent innovation, achieving technological breakthroughs in multiple dimensions, such as simulating the real physical world; possessing imagination; understanding multicamera languages; generating videos of up to 16 seconds with a single click; ensuring highly consistent character-scene timing and understanding Chinese elements," said Zhu Jun, vice-dean of the Institute for Artificial Intelligence at Tsinghua University and chief scientist of Shengshu Technology.

Such leading-edge technologies are examples of Beijing's AI research, which provides a foundation for the sustainable growth of related industries.

The city has released a batch of policies to encourage the development of the AI industry.

The policies are aimed at enhancing the supply of intelligent computing power; strengthening industrial basic research; promoting the accumulation of data; accelerating the innovative application of large models and creating a first-class development environment, Lin Jinhua, deputy director of the Beijing Commission of Development and Reform, said at the Future AI Pioneer Forum, part of the AI Day.

Beijing will pour more than 100 billion yuan ($13.8 billion) into optimizing its business and financing environment over the next five years and will award AI breakthrough projects that have been included in major national strategic tasks up to 100 million yuan, according to Lin.

An international AI innovation zone is planned for the city's Haidian district, said Yue Li, executive deputy head of the district.

The zone will leverage research and industrial resources in the district, including 52 key national laboratories; 106 national-level research institutions; 37 top-tier universities, including Peking University and Tsinghua University; 89 top global AI scholars; and 1,300 AI businesses, to create a new innovation ecosystem paradigm, Yue said.

Follow this link:

Impact of AI felt throughout five-day event - China Daily


More details of the AI upgrades heading to iOS 18 have leaked – TechRadar

Artificial intelligence is clearly going to feature heavily in iOS 18 and all the other software updates Apple is due to tell us about on June 10, and new leaks reveal more about what's coming in terms of AI later in the year.

These leaks come courtesy of "people familiar with the software" speaking to AppleInsider, and focus on the generative AI capabilities of the Ajax Large Language Model (LLM) that we've been hearing about since last year.

AI-powered text summarization covering everything from websites to messages will apparently be one of the big new features. We'd previously heard this was coming to Safari, but AppleInsider says this functionality will be available through Siri too.

The idea is you'll be able to get the key points out of a document, a webpage, or a conversation thread without having to read through it in its entirety, and presumably Apple is going to offer certain assurances about accuracy and reliability.

Ajax will be able to generate responses to some prompts entirely on Apple devices, without sending anything to the cloud, the report says, and that chimes with previous rumors about everything running locally.

That's good for privacy, and for speed: according to AppleInsider, responses can come back in milliseconds. Tight integration with other Apple apps, including the Contacts app and the Calendar app, is also said to be present.

AppleInsider mentions that privacy warnings will be shown whenever Ajax needs information from another app. If a response from a cloud-based AI is required, it's rumored that Apple may enlist the help of Google Gemini or OpenAI's ChatGPT.


Spotlight on macOS will be getting "more intelligent results and sorting" too, AppleInsider says, and it sounds like most of the apps on iOS and macOS will be getting an AI boost. Expect to hear everything Apple has been working on at WWDC 2024 in June.

Visit link:

More details of the AI upgrades heading to iOS 18 have leaked - TechRadar


US Air Force Secretary Kendall flies in cockpit of plane controlled by AI – Fox News

U.S. Air Force Secretary Frank Kendall rode in the cockpit of a fighter jet on Friday, which flew over the desert in California and was controlled by artificial intelligence.

Last month, Kendall announced his plans to fly in an AI-controlled F-16 to the U.S. Senate Appropriations Committee's defense panel, while speaking about the future of air warfare being dependent on autonomously operated drones.

On Friday, the senior Air Force leader followed through with his plans, making what could be one of the biggest advances in military aviation since stealth planes were introduced in the early 1990s.

Kendall flew to Edwards Air Force Base, the same desert facility where Chuck Yeager broke the sound barrier, to watch and experience AI flight in real time.


The X-62A VISTA aircraft, an experimental AI-enabled Air Force F-16 fighter jet, takes off on Thursday, May 2, 2024, at Edwards Air Force Base, Calif. The flight, with Air Force Secretary Frank Kendall riding in the front seat, is serving as a public statement of confidence in the future role of AI in air combat. The military is planning to use the technology to operate an unmanned fleet of 1,000 aircraft. (AP Photo/Damian Dovarganes)

After the flight, Kendall spoke with the Associated Press about the technology and the role it will play in air combat.

"Its a security risk not to have it. At this point, we have to have it," the secretary said.

The Associated Press and NBC were granted permission to watch the secret flight with the agreement that neither would report on the matter until the flight was complete, due to security concerns.


Air Force Secretary Frank Kendall sits in the front cockpit of an X-62A VISTA aircraft at Edwards Air Force Base, Calif., on Thursday, May 2, 2024. The flight on the Artificial Intelligence-controlled modified F-16 is serving as a public statement of confidence in the future role of AI in air combat. The military is planning to use the technology to operate an unmanned fleet of 1,000 aircraft. Arms control experts and humanitarian groups are concerned that AI might one day be able to take lives autonomously and are seeking greater restrictions on its use. (AP Photo/Damian Dovarganes)

The F-16 controlled by AI is called Vista, and it flew Kendall in maneuvers reaching over 550 mph, putting pressure on his body of nearly five times the force of gravity.

Flying alongside Vista and Kendall was a human-piloted F-16, and the two jets raced within 1,000 feet of each other, performing twists and loops in an effort to force their opponent into a vulnerable position.

Kendall grinned as he climbed out of the cockpit after the hour-long flight, saying he saw enough to trust the AI technology in deciding whether to fire weapons during a war.


This image from remote video released by the U.S. Air Force shows Air Force Secretary Frank Kendall during his experimental flight inside the cockpit of a X-62A VISTA aircraft autonomous warplane above Edwards Air Base, Calif, on Thursday, May 2, 2024. The AI-controlled flight is serving as a public statement of confidence in the future role of AI in air combat. (AP Photo/Damian Dovarganes)

Many oppose the idea of computers making that decision, fearing AI may one day be able to drop bombs on people without consulting with humans.

The same people who oppose AI-powered war machines are also seeking greater restrictions on their use.

One of the groups seeking stronger restrictions is the International Committee of the Red Cross.

"There are widespread and serious concerns about ceding life-and-death decisions to sensors and software," the group warned, adding the autonomous weapons "are an immediate cause of concern and demand an urgent, international political response."


An AI-enabled Air Force F-16 fighter jet, left, flies next to an adversary F-16, as both aircraft race within 1,000 feet of each other, trying to force their opponent into vulnerable positions, on Thursday, May 2, 2024, above Edwards Air Force Base, Calif. The flight is serving as a public statement of confidence in the future role of AI in air combat. The military is planning to use the technology to operate an unmanned fleet of 1,000 aircraft. (AP Photo/Damian Dovarganes)

Still, Kendall says human oversight will always be at play when weapons are considered.

The Air Force is planning a fleet of more than 1,000 AI-operated drones, with the first in operation by 2028.

In March, the Pentagon said it was looking to develop new artificial intelligence-guided planes, offering two contracts for which several private companies will compete.

The Collaborative Combat Aircraft (CCA) project is part of a $6 billion program that will add at least 1,000 new drones to the Air Force. The drones will be designed to deploy alongside human-piloted jets and provide cover for them, acting as escorts with full weapons capabilities. The drones could also act as scouts or communications hubs, according to a report from The Wall Street Journal.


Air Force Secretary Frank Kendall smiles after a test flight of the X-62A VISTA aircraft against a human-crewed F-16 aircraft in the skies above Edwards Air Force Base, Calif., on Thursday, May 2, 2024. The flight on the Artificial Intelligence-controlled VISTA is serving as a public statement of confidence in the future role of AI in air combat. The military is planning to use the technology to operate an unmanned fleet of 1,000 aircraft. (AP Photo/Damian Dovarganes)

The companies bidding for the contract include Boeing, Lockheed Martin, Northrop Grumman, General Atomics and Anduril Industries.

Cost-cutting is one of the elements of AI that appeals to the Pentagon for pursuing the project.

In August 2023, Deputy Secretary of Defense Kathleen Hicks said deploying AI-enabled autonomous vehicles would provide "small, smart, cheap and many" expendable units to the U.S. military, helping overhaul the "too-slow shift of U.S. military innovation."

But the idea is also not to fall too far behind China, which has modernized its air defense systems; those systems are much more sophisticated and put manned planes at risk when they get too close.


Drones have the potential of interrupting such defense systems and could be used to jam them or provide surveillance for crews.

The Associated Press contributed to this report.

Greg Wehner is a breaking news reporter for Fox News Digital.

Story tips can be sent to Greg.Wehner@Fox.com and on Twitter @GregWehner.

See original here:

US Air Force Secretary Kendall flies in cockpit of plane controlled by AI - Fox News
