
How legal departments can get the most out of artificial intelligence – Wolters Kluwer

This article by Abhishek Mittal, vice president of data and operational excellence at Wolters Kluwer, was originally published in Legal Dive.

Artificial intelligence (AI) is changing workflows in every corner of the business, and legal departments are no exception.

The term artificial intelligence was coined by computer scientist John McCarthy nearly 70 years ago.

Since then, AI has become one of the most promising technological innovations in the corporate world and beyond. Google CEO Sundar Pichai has even suggested that AI may be more impactful than the discovery of fire or the invention of electricity.

Much like a fire, though, AI doesn't keep burning on its own. Someone must build and train it. That's why, a decade ago, Harvard Business Review declared data scientist "the sexiest job of the 21st century."

Having worked with many brilliant data scientists, I find the job title to be a bit of a misnomer. To start, successful AI solutions require the right mix of design, data, and domain expertise.

Data scientists on their own cannot build AI models, just as AI models on their own cannot handle all decision-making. That's why I refer to data scientists as decision scientists. Even with the advent of AI, decision-making is still in human hands at the end of the day.

Let's take a closer look at what that means in practice and how corporate legal departments can get the most out of the technology.

One of the biggest misperceptions about artificial intelligence is that it is going to replace people, which is simply not the case. Instead, legal professionals who use AI will replace legal professionals who do not.

Think of AI as an enabler, akin to the technology in smart cars. The car cannot drive itself, but it can help with specific tasks like backing up, parking, or changing lanes.

In the future, AI will be as ubiquitous as Microsoft Excel. But decision-making and review processes will still require validation by a human end user.

When my company was designing its AI-assisted legal invoice review, for instance, we first paired data scientists and domain experts to build the model. Then, we put the AI that they built to the test.

We gave one group of experts a set of invoices to review manually. We gave another group the same set of invoices but accompanied by AI scores provided by our newly built model. We did this repeatedly, so we could track the results over time.

The experts with AI were able to generate greater savings, sometimes saving four times more than the control group. The AI acted as a highlighter, allowing them to focus on items that demanded greater due diligence. But humans were still part of the review process.
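
To make the setup concrete, here is a hypothetical sketch of that comparison in Python; the invoice data, model scores, catch rates, and flagging threshold are all invented for illustration and are not Wolters Kluwer's actual model.

```python
# A minimal sketch of the evaluation described above: one group reviews
# every invoice line manually, the other focuses on lines the model
# flags. All numbers here are hypothetical.
import random

random.seed(42)

# Each line item: (billed_amount, overbilled_amount, model_score).
# We assume the hypothetical model gives overbilled lines higher scores.
lines = []
for _ in range(1000):
    overbilled = random.random() < 0.1
    amount = random.uniform(100, 5000)
    error = amount * 0.2 if overbilled else 0.0
    score = random.uniform(0.7, 1.0) if overbilled else random.uniform(0.0, 0.6)
    lines.append((amount, error, score))

# Manual reviewers catch a fraction of the errors across all lines.
CATCH_RATE_MANUAL = 0.3
manual_savings = sum(err * CATCH_RATE_MANUAL for _, err, _ in lines)

# AI-assisted reviewers concentrate on flagged lines, catching most of
# the errors the model surfaces: the "highlighter" effect.
CATCH_RATE_FOCUSED = 0.9
flagged = [err for _, err, score in lines if score > 0.8]
assisted_savings = sum(err * CATCH_RATE_FOCUSED for err in flagged)

print(f"manual savings:   {manual_savings:,.0f}")
print(f"assisted savings: {assisted_savings:,.0f}")
```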

Once you understand that AI is not going to replace human talent, it becomes more obvious that you need the right people to get the most out of the technology.

In the beginning, we had more data scientists (as they're commonly called) than we did domain experts. But domain experts are the ones who know which processes and customer experiences are best suited to be improved by AI.

We've continued to grow our roster of domain experts, and they use AI more frequently than our data scientists.

Additionally, they're the ones driving enhancements, as they have a better understanding of what problems need to be solved.

Not all companies can build their own AI models in-house, though. If you're looking to choose a partner, pick one whose models have the most real-world usage.

Ask potential partners how many customers are using their models and how many years of experience their models have. Many companies will throw out all the right buzzwords.

But a tried-and-true model is the key to getting the most out of artificial intelligence.

According to McKinsey, by the year 2025, data will be embedded in every decision, interaction, and process. But in the meantime, it's important to prioritize use cases based on which problems are most suitable for AI.

To that end, ask yourself: What decision are we trying to improve? Are there a lot of transactions happening? Do we have sufficient data? Is there an opportunity to create a feedback loop?

Once again, the right mix of data scientists and domain experts is key to answering these questions.

In some cases, people use AI for quality assurance checks. Other times, it's used for predictive insights. Regardless, it's very important to analyze the "why" of your use case before you start building the model.

Our AI-assisted invoice review was an appealing use case because we had so much data on legal spend already. This gave us a huge head start when it came time to build our models.

The promise of artificial intelligence cannot be overstated; it will be commonplace in corporate legal departments in no time. And yet, AI is not a plug-and-play solution that's going to make decisions for you.

Instead, it should enable your experts to make better decisions for themselves. AI isn't meant to replace human beings; it's meant to augment their knowledge. Plan accordingly.

Read the rest here:
How legal departments can get the most out of artificial intelligence - Wolters Kluwer


What Is Google Brain, and What Is Its Role in Artificial Intelligence ... - MUO - MakeUseOf

For a long time, engineers and scientists sought to make artificial intelligence (AI) perform like the human brain. This feat became feasible with the creation of Google Brain, an AI research team, in 2011. So what does Google Brain entail, and what are its advancements and breakthroughs in AI?

The human brain is likely the most complex creation: an intricate biological machine with many areas simultaneously performing different tasks. However, AI developers aim to make AI systems perform complex operations and solve problems like humans.

In 2011, Andrew Ng, a college professor, Jeff Dean, a Google fellow, and Greg Corrado, a Google Researcher, established Google Brain as a research team for exploring AI.

Initially, the team didn't have an official name; after Ng joined Google X, he began collaborating with Dean and Corrado to integrate deep learning processes into Google's existing infrastructure. Eventually, the team became a part of Google Research and was called "Google Brain."

The founding Brain team members sought to create intelligence that could independently learn from large amounts of data. They also aimed to address existing AI networks' challenges, including language understanding, speech, and image recognition.

In 2012, Google Brain achieved a breakthrough. The researchers fed millions of images obtained from YouTube into the neural network to train it on pattern recognition without prior information. After the experiment, the network recognized cats with a high degree of accuracy. This breakthrough paved the way for a wide range of applications.
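
To give a flavor of how a network can learn structure from unlabeled images, here is a toy autoencoder sketch in PyTorch; the image size, layer widths, and random stand-in data are illustrative and vastly simpler than Google Brain's actual setup.

```python
# A toy autoencoder: the network learns to reconstruct unlabeled images,
# and useful features can emerge without any labels. In the real 2012
# experiment, one learned unit responded strongly to cats.
import torch
import torch.nn as nn

autoencoder = nn.Sequential(
    nn.Flatten(),
    nn.Linear(32 * 32, 128),  # encoder: compress the image
    nn.ReLU(),
    nn.Linear(128, 32 * 32),  # decoder: reconstruct it
)

optimizer = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Stand-in for millions of unlabeled video frames.
frames = torch.rand(256, 1, 32, 32)

for epoch in range(10):
    recon = autoencoder(frames)
    loss = loss_fn(recon, frames.flatten(1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# After training, the hidden layer's activations serve as learned features.
```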

Google Brain revolutionized how software engineers thought of AI, contributing significantly to its development. The Brain team has achieved tremendous results in many machine learning operations; its successes formed the foundation for AI's speech and image recognition and natural language processing.

One of the Brain team's most important contributions is the development of deep learning and the progression of Natural Language Processing (NLP).

NLP involves teaching computers human languages and helping them interact, delivering improved results with continued exposure. For instance, Google Assistant uses NLP to understand your queries and respond appropriately.

The Brain team has contributed to Computer Vision: identifying pictures and objects in visual data. In 2012, Google Brain introduced a neural network to classify images into 1,000 categories. Today, there are several unexpected uses for Computer Vision.

Google Brain also developed Neural Machine Translation (NMT). Before the Brain team's work, most translation systems used statistical methods; Google's Neural Machine Translation was a significant upgrade.

The system translates whole sentences at once, resulting in more accurate translations that sound natural. Google Brain has also developed network models that can accurately transcribe speech.

The Brain team has pioneered a host of Google applications since its inception in 2011, including the following.

The Google Assistant, found in many smartphones today, provides personalized information, helps you set reminders and alarms, makes calls to various contacts, and even controls smart devices around the home.

This assistant relies on the machine learning algorithms provided by Google Brain to interpret speech and give an accurate response. With these algorithms, Google Assistant makes your life easier by learning your preferences and, after prolonged usage, understands you even better.

The Google Translate system uses Neural Machine Translation, which employs deep learning algorithms from Google Brain. This allows Google Translate to identify, understand, and accurately translate the text into the desired language.

NMT also uses a "sequence-to-sequence" modeling approach. This means phrases and whole sentences are translated in one go rather than word by word. Over time, as you interact with Google Translate, it gathers information, which allows it to provide more natural-sounding translations in the future.
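
As a rough illustration of that idea, the sketch below wires up a toy sequence-to-sequence model in PyTorch: an encoder reads the whole source sentence into a state vector, and a decoder generates the target sentence from it. The vocabulary sizes, dimensions, and random token IDs are illustrative, not anything from Google's production system.

```python
# A minimal encoder-decoder ("sequence-to-sequence") sketch: the whole
# source sentence is encoded at once, then decoded token by token.
import torch
import torch.nn as nn

SRC_VOCAB, TGT_VOCAB, HIDDEN = 1000, 1200, 64

class Seq2Seq(nn.Module):
    def __init__(self):
        super().__init__()
        self.src_embed = nn.Embedding(SRC_VOCAB, HIDDEN)
        self.tgt_embed = nn.Embedding(TGT_VOCAB, HIDDEN)
        self.encoder = nn.GRU(HIDDEN, HIDDEN, batch_first=True)
        self.decoder = nn.GRU(HIDDEN, HIDDEN, batch_first=True)
        self.out = nn.Linear(HIDDEN, TGT_VOCAB)

    def forward(self, src_ids, tgt_ids):
        # Encode the entire source sentence into a single state...
        _, state = self.encoder(self.src_embed(src_ids))
        # ...then decode the whole target sequence conditioned on it.
        dec_out, _ = self.decoder(self.tgt_embed(tgt_ids), state)
        return self.out(dec_out)  # per-token scores over target vocab

model = Seq2Seq()
src = torch.randint(0, SRC_VOCAB, (1, 7))   # a 7-token source sentence
tgt = torch.randint(0, TGT_VOCAB, (1, 9))   # a 9-token target sentence
print(model(src, tgt).shape)                # torch.Size([1, 9, 1200])
```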

If you need more insight, check out how to translate audio with Google Translate on your Android phone.

While Google Photos is primarily a cloud-based photo and video storage application, it uses Google Brain's algorithms to organize and categorize media automatically. This lets Google Photos make it easier for you to manage your stored pictures. So, when you take a picture, Google Photos recognizes you, your friends, objects, and even landmarks and events present in the photo.

The application also adds tags to help you group the picture for future reference. This feature is particularly useful for finding and sharing memories with friends later.

Google Brain, since its inception, has dramatically expanded AI using top-notch neural network algorithms. The Brain team has contributed to speech and image recognition breakthroughs, machine learning frameworks, and natural language processing.

Read more here:
What Is Google Brain, and What Is Its Role in Artificial Intelligence ... - MUO - MakeUseOf


SHIFT exhibit presents artistic perspectives on artificial intelligence – Boing Boing

Last week at Art Museum Stuttgart I saw a really cool exhibit called "SHIFT," which focused on various conceptualizations and manifestations of artificial intelligence (AI), and AI's profound ecological, sociological, political, and economic impacts on the world. The museum's website describes the project:

The exhibition project developed by the Kunstmuseum Stuttgart and the Museum Marta Herford deals with artistic perspectives on artificial intelligence (AI). AI is based on algorithms that are already having an impact on political, economic and social processes, both visible and invisible. As a key technology of the 21st century, AI has long since arrived in mainstream society. It is associated with both hopes and concerns.

The term SHIFT underlines the thesis of the exhibition that this digital technology is permanently changing the idea of a community in which people, nature and technology are in a cooperative relationship.

Eight international artistic positions will be shown. The works make the complex interrelationships of AI comprehensible. They are based on investigations into real and artificial physicality, digital surveillance, biological intelligence and hybrid life forms, social power relations, language technology, immortality and NFTs. The artists always ask the question of ethical responsibility when dealing with AI.

Both the exhibition and the "Studio 11. Space for Art Education" offer the opportunity to engage in depth with AI. The extensive accompanying program was designed together with the Stuttgart Center for Simulation Science (SC SimTech) and the Cyber Valley Stuttgart/Tübingen.

One of the most interesting pieces in the exhibit was by artist Heather Dewey-Hagborg, whose website describes her artistic practice:

Dr. Heather Dewey-Hagborg is an artist and biohacker who is interested in art as research and technological critique. Her controversial biopolitical art practice includes the project Stranger Visions, in which she created portrait sculptures from analyses of genetic material (hair, cigarette butts, chewed up gum) collected in public places.

We discussed her work here at Boing Boing, back in 2016, via this interview with Chelsea Manning, the subject of the piece I recently saw in Stuttgart. "Probably Chelsea" consists of 30 sculptures of Manning's face, all created from one genetic sample from Manning, suspended from the ceiling, as if they were floating in air. I was astounded at how different they all were; it's hard to believe they were all derived from the same genetic sample, which is one of the problems with AI-focused data analysis that this piece raises. The placard at the exhibit described the work:

For "Probably Chelsea" she created thirty different versions of possible portraits of the US-American Chelsea E. Manning from one genetic sample. Manning, a soldier and IT specialist, had leaked classified data on U.S. activities during the war in Iraq and Afghanistan to the website WikiLeaks in 2010. She was arrested and sentenced to prison in 2013. With the sculptures, the artist gave her public visibility during her time in prison, during which Manning underwent gender affirming surgery. Dewey-Hagborg's work illustrates that the analysis of genetic and personal data, increasingly done through AI, is not always definite or leading to clear conclusions.

You can learn more about Dewey-Hagborg's bio-focused work here and about her AI-focused work here. You can also read about an earlier iteration of the Manning project here.

Here is the original post:
SHIFT exhibit presents artistic perspectives on artificial intelligence - Boing Boing


Artificial Intelligence isn’t about to make us all jobless just yet NRTA – NatRoad

Original Article published in March/April 2023 Power Torque

BY WARREN CLARK*

Most of us are hearing a lot about Artificial Intelligence and the prospect of it replacing humans in many types of work.

Artificial Intelligence (that's AI for short) is the technology that enables machines (such as robots and computers) to think and act like humans.

And ChatGPT is an AI website that uses information it has learned to generate text in response to questions.

I asked it to write my column for this month and started by asking about Deals on Wheels:

When it comes to trucking, safety is a top priority. Even when it comes to saving money, you don't want to compromise safety. Here at Deals on Wheels, we understand this and strive to give our readers the best deals while still making sure they're getting the highest quality of safety and reliability. For starters, we always look for the best deals on trucks, parts, and accessories. We make sure to only feature those that are from reputable manufacturers and are designed for the safe operation of a truck ... At Deals on Wheels, safety is our top priority. We want to provide our readers with the best deals while still making sure they're getting the highest quality of safety and reliability.

So far, so good.

I was on a roll.

It did get me thinking about the enormous impact technology is already having on our industry.

AI is already here in the heavy vehicle sector. ChatGPT told me so:

AI has been used in the trucking industry for many years, and its uses are becoming increasingly more sophisticated. AI can be used to automate navigation, loading, and unloading, as well as other tasks. It can also be used to optimize routes, reduce fuel costs, and minimize downtime. AI can also be used in truck maintenance and safety, as well as in predictive maintenance, which can save costs in the long run. AI can also be used to analyse data to improve customer service and provide customers with better insights into their operations. AI is also being used for driverless trucks, which could potentially reduce accidents, reduce traffic congestion, and improve efficiency.

Driverless trains are already here on the Sydney Metro. The New South Wales government has trialled an autonomous bus to run along a simple Point A to Point B route at Homebush Bay in Sydney.

Since 2021, Australian company Mineral Resources has been piloting an autonomous triple road train platoon in the Pilbara, with each combination capable of hauling 300 tonnes of iron ore.

The National Transport Commission has been developing a regulatory framework for some self-driving vehicles. The Automated Vehicle Safety Law is due in 2026 and is expected to lay out national standards for autonomous vehicles.

You might expect this will result in sweeping changes to how road transport operates.

Not so fast.

American researchers at two universities have been assessing the impact of autonomous trucking on operator hours. They say there will be a profound impact on long-haul trucking, but it may take decades to come about.

In March last year, an autonomous 18-wheel truck spent more than five successive days hauling goods between Dallas and Atlanta, running around the clock to cover multiple journeys totalling 10,000 kilometres.

The same freight task would have taken a human driver (with mandated rest breaks) about 10 days.
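
A quick back-of-the-envelope calculation shows where those figures come from; the average speed and the daily driving limit below are assumptions for illustration, not numbers from the study.

```python
# Round-the-clock operation versus a human driver with mandated rest.
TOTAL_KM = 10_000
AVG_SPEED_KMH = 80                            # assumed highway average

autonomous_hours = TOTAL_KM / AVG_SPEED_KMH   # truck runs 24 hours a day
autonomous_days = autonomous_hours / 24

MAX_DRIVING_HOURS_PER_DAY = 11                # assumed rest rules
human_days = TOTAL_KM / (AVG_SPEED_KMH * MAX_DRIVING_HOURS_PER_DAY)

print(f"autonomous: {autonomous_days:.1f} days")  # ~5.2 days
print(f"human:      {human_days:.1f} days")       # ~11.4 days
```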

Critically, a human safety driver was in the cab at all times, and had to take the wheel many times.

It seems AI is up to the task when it's straightforward but has a lot to learn when conditions are tough. Another study highlights its issues in built-up environments where the sun doesn't shine so much.

Anyway, I was emboldened enough to ask ChatGPT how long before AI replaces my role:

It is difficult to predict exactly when a CEO of a lobby group will be replaced by AI, as it depends on a variety of factors, including the specific organization and its goals, the level of technological development in the field of AI, and the acceptance and adoption of AI within the industry. However, it is possible that in the future, AI could play a larger role in the operations of lobby groups, such as in the analysis of data and the development of strategies.

ChatGPT is unlikely to be able to attend meetings with Ministers any time soon, so I reckon my job is safe.

If you're a competent heavy vehicle operator whose skills are very much in demand right now, I suggest yours is too.

AI clearly has a role to play in many facets of road transport, especially in route planning and the management of bulk freight. Used properly, it's going to make everybody's job easier.

But humans with skills, finely honed by years of experience, aren't going to vacate the scene any time soon.

And remember: governments at all levels are still trying to work out how to get us into electric trucks.

See the original post:
Artificial Intelligence isn't about to make us all jobless just yet - NRTA - NatRoad


Approaching artificial intelligence through an interdisciplinary lens ... - The University of North Carolina at Chapel Hill

Imagine sitting at home, scrolling through suggestions for what to watch next on Netflix, when your cellphone buzzes. You pick it up, your Face ID unlocks the phone, and you see an alert about suspicious credit card charges. You call the bank and go through a series of automated prompts that lead to the fraud department.

All of these activities are powered by artificial intelligence AI.

"What's possible nowadays seemed like magic 10 years ago," said Thomas Hofweber, a professor of philosophy and director of the AI Project at Carolina.

While AI has been around for decades, the rate at which the technology is progressing is unprecedented.

"It's become more and more urgent," Hofweber said. "It's happening now; that's the feeling in the air."

That urgency has prompted the new initiative in the College of Arts and Sciences. Housed in the philosophy department and conducted in collaboration with computer science, linguistics and the Parr Center for Ethics, the AI Project is designed to advance research and collaboration on the philosophical foundations and significance of artificial intelligence and virtual worlds. The group began hosting events in February featuring a variety of approaches to the topic.

The inspiration for the partnership dates to Hofweber's time as a graduate student at Stanford University. While there, he joined the university's Center for the Study of Language and Information and started learning about artificial intelligence.

"This was the '90s, and what people were doing with AI was very different from what it is now," he said.

Still, Hofweber saw the value of having a gathering space where people with different expertise could come together to discuss a common interest. For years, he has wanted to create the same kind of exchange at UNC, generating a more open dialogue and ultimately a more comprehensive understanding of artificial intelligence and its implications.

Peter Hase, a Ph.D. candidate in computer science, is excited to be involved with the AI Project.

"I think it's good to get out of our bubble as computer scientists," Hase said. "It will be fun to bounce around ideas in this broader community."

According to Hase, the AI Project will help ensure that people in his department are using the right approaches in their research and sharing their findings and developments accurately.

"Computer scientists are doing impressive things," Hase said. "But they sometimes use certain terminology loosely or misuse it."

For example, a computer scientist may unintentionally anthropomorphize a chatbot by giving it a human name or gender. Such an action raises philosophical questions about how autonomy and morality should (or shouldn't) be applied to AI.

"All these people have different expertise," Hofweber said. "Computer scientists do the actual training and programming on language models, but they don't spend years debating precisely how language works. That's for the linguists to do."

Katya Pertsova is an associate professor of linguistics at UNC. For professionals in her field, she says a big question is: When are we going to have artificial intelligence that is capable of language to the same extent as humans?

"I don't think we are there yet," Pertsova said. "But we're definitely closer to it."

While large language models like ChatGPT are very adept at generating certain kinds of text, they also fail spectacularly, according to Pertsova, as recent media coverage has shown.

But its only a matter of time until computer scientists correct those failures.

"It's something that is going to change all of our lives very quickly," Pertsova said. Having multiple scholars interacting with this topic could lead to a better understanding of its issues.

In his course, "AI and the Future of Humanity," Hofweber and his students regularly discuss some of these big issues, like: What is the moral status of an AI system? Do you need a biological basis for consciousness? What is the difference between AI and a non-human animal?

Hofweber is quick to point out that posing these types of questions is not a simple thought experiment.

"There are a lot of philosophical issues that apply to general AI systems, and philosophical reflection can show us a great deal about how our minds work," he said. "All of these debates are relevant."

As the AI Project grows and evolves, more departments could get involved. For now, Hofweber is excited to officially connect philosophers, linguists and computer scientists.

"There are a lot of curious people working in this area," he said. "It's great to bring them all together."

Learn more about the College of Arts and Sciences

Excerpt from:
Approaching artificial intelligence through an interdisciplinary lens ... - The University of North Carolina at Chapel Hill


Microsoft brings artificial intelligence tool Copilot to Word, Teams – Mint Lounge

On 16 March, Microsoft announced its latest artificial intelligence (AI) integration with Microsoft 365 Copilot. Focused on workplace productivity, Copilot "combines the power of large language models with business data and the Microsoft 365 apps to unleash creativity, unlock productivity and uplevel skills," according to a press statement.

Also read: All you need to know about Google's new AI features and GPT-4


There are two main ways to use the GPT-4 powered Copilot, which is designed to work like an assistant. First, it is integrated with everyday apps such as Word, Excel, PowerPoint, Outlook, and Teams so users can access it easily. Second, Microsoft has also launched Business Chat. It works alongside your Microsoft 365 apps and data, such as calendar, emails, chats, documents, meetings, and contacts, to enhance the experience and make it easier and faster.

For instance, you can use prompts such as "tell your team how you updated a product strategy" and it will generate an update based on the morning's meetings, emails, and chat threads.
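
Stripped of product specifics, the underlying pattern is to assemble workplace context into a single prompt for a language model. The sketch below is purely hypothetical: `call_llm` and `draft_team_update` are invented stand-ins for illustration, not real Microsoft APIs.

```python
# A hypothetical sketch of grounding a request in workplace data.
def call_llm(prompt: str) -> str:
    # In a real system this would call a language model endpoint;
    # here it just echoes the prompt so the sketch runs end to end.
    return f"[draft based on prompt: {prompt[:60]}...]"

def draft_team_update(request: str, meetings, emails, chats) -> str:
    # Ground the request in the morning's data, as the article describes.
    context = "\n".join(
        ["Today's meetings:", *meetings,
         "Recent emails:", *emails,
         "Chat threads:", *chats])
    return call_llm(f"Using only the context below, {request}\n\n{context}")

print(draft_team_update(
    "tell your team how you updated a product strategy",
    meetings=["9am strategy review: shifted focus to mid-market"],
    emails=["Re: roadmap - pricing change approved"],
    chats=["#strategy: new positioning doc shared"],
))
```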

"With our new copilot for work, we're giving people more agency and making technology more accessible through the most universal interface: natural language," Satya Nadella, Chairman and CEO of Microsoft, said in a statement.

According to Microsoft, Copilot will "fundamentally transform the way we work" and will turn "your words into the most powerful productivity tool on the planet."

Copilot has been built to free up space and help people focus better on the tasks at hand, according to the statement. In Word, it writes, summarises and creates along with the user. It can give you a draft to edit or use as a starting point. Although Microsoft acknowledges Copilot might not always be right, it will help you save sourcing, writing, and editing time.

In PowerPoint, you can create vivid presentations with a single prompt and then add relevant content. In Excel, Copilot will help create data visualizations in a matter of seconds and identify trends.

Now, clearing your inbox won't leave you drained as Copilot can do that for you in minutes and help you manage it so that you can focus on the communication part. With the Copilot in Power Platform, you can automate repetitive tasks, create chatbots and turn an idea into a working app in minutes, according to the statement.

Microsoft is currently testing Copilot with select commercial customers.

Also read: Explained: Discord announces AI chatbot Clyde and more

Read the rest here:
Microsoft brings artificial intelligence tool Copilot to Word, Teams - Mint Lounge


Workday’s Artificial Intelligence and Machine Learning Blitz – Acceleration Economy

Welcome to the Cloud Wars Minute, your daily news and commentary show, hosted by Cloud Wars Founder Bob Evans. Each episode provides insights and perspectives around the reimagination machine that is the Cloud.

This episode is sponsored by Acceleration Economy's Digital CIO Summit, taking place April 4-6. Register for the free event here. Tune in to hear CIO practitioners discuss their modernization and growth strategies.

In this Cloud Wars Minute, Bob Evans discusses Workday's 38 use cases that leverage artificial intelligence (AI) and machine learning (ML) technologies.

00:38 With the Golden Gate Bridge behind him, Bob uses it as a metaphor for what is happening in the business world: straying from traditional ways and moving into an AI and ML era.

01:08 Workday has just offered 38 use cases that leverage AI and ML technologies. Further details will be available in today's Cloud Wars News article.

01:28 The company has been an active and eager proponent of AI and ML for almost a decade, explains Bob; these technologies are one of the reasons the company has grown the way it has. They have also contributed to the company's loyal customer base.

02:07 With Workday, AI and ML are not features. They are deeply woven into its products and applications, serving as a central part of who the company is.

02:33 Bob shares some key points from Workday's 38 use cases.

03:37 Executives are leaving the "magic stuff" relating to AI and ML behind, focusing on the growth, simplification, acceleration, and innovation that can arise from these technologies. "Maybe that's the bridge that takes us into the ML and AI future," suggests Bob.

See the original post here:
Workday's Artificial Intelligence and Machine Learning Blitz - Acceleration Economy


Designers’ dilemma: Embrace or evade the advancements of … – The Star Online

Machines might not be ready to replace humans just yet, but there's no escaping the fact that they're playing a growing role in certain fields. Design is one of them.

The sector is changing rapidly with the advent of so-called "generative" artificial intelligence, which designs objects based on textual descriptions.

This technology allows anyone to slip into the shoes of a designer and imagine products that could become part of our daily lives.

Eric Groza has tried his hand at this. The Dubai-based creative director decided to entrust the design of a fictional collaboration between the Swedish furniture giant Ikea and the California-based outdoor clothing brand Patagonia, to Midjourney, an artificial intelligence image generation programme.

He shared the results of his experiment on LinkedIn, in a post that spurred nearly 43,000 reactions. It features fictive creations such as camping chairs, an outdoor sofa and lamps that combine the visual identities of the two companies.

To come up with this capsule collection, Groza launched the generation of 200 different images in Midjourney. In the end, he selected just eight that he believes illustrate both brands' commitment to sustainability and environmental protection.

"For Ikea (this collaboration) would push the boundaries of their furniture to a new frontier. For Patagonia, it would showcase how their sustainable multifunctional comfort can be used indoors as well," he explained.

But creating this imaginary collection was not so straightforward for the creative director. The first images generated by Midjourney were not very convincing because Groza had not yet mastered the art of the prompt, i.e., the ability to formulate instructions sent to the artificial intelligence tool to enable it to create a visual from scratch.

The image generation software that has flooded onto the Internet in recent months, including DALL-E 2, Imagen, DreamBooth and Stable Diffusion, works on the basis of keywords and textual descriptions.

They use language understanding and learning models on very large amounts of data to design images from any written request.

But you still have to know how to formulate these requests correctly. And that's exactly where the challenge lies in using generative artificial intelligence.
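
For readers curious what formulating such a request looks like in code, here is a minimal sketch using the open-source Stable Diffusion model through Hugging Face's diffusers library (one of the tools named above); the checkpoint name, prompt wording, and step count are illustrative choices, not anything Groza used.

```python
# A minimal text-to-image sketch with the open-source `diffusers` library.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # a publicly available checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# The "prompt" is the textual description the article discusses;
# small wording changes can produce very different images.
prompt = ("product photo of a minimalist camping chair, "
          "recycled fabric, flat-pack design, studio lighting")

image = pipe(prompt, num_inference_steps=30).images[0]
image.save("concept.png")
```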

Progress that raises questions

However, once the right prompt is found, the machine does all the work.

"Its magic, but requires someone at the controls to hold its hand," said Groza. The images of the Ikea/Pantagonia collaboration quickly went viral on the professional social network, where they drew mixed reactions.

Some praised the technological feat, while others objected to the use of image-generating software in the design industry. The latter fear that some of the work usually done by designers will be delegated to AI, a concern shared by many professionals in the creative industries.

The design industry has been grappling with these questions for years, even if generative artificial intelligence programmes are relatively recent.

In 2018, the US software company Autodesk began offering an "iterative design" service within its Netfabb Ultimate product design software.

Here, the user specifies requirements such as the material used or the production method, which the machine then takes into consideration to come up with ideas.

The user can then choose from the options available, or adapt the request until the desired result is obtained.
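
As a rough illustration of that specify-generate-choose loop, here is a toy sketch; the design parameters, constraints, and scoring rule are all hypothetical and bear no relation to Autodesk's actual algorithms.

```python
# A toy generate-and-filter loop: propose many candidates, keep those
# satisfying the user's requirements, and rank the rest.
import random

random.seed(7)

def random_candidate():
    """Propose a random bracket design: thickness (mm) and rib count."""
    return {"thickness_mm": random.uniform(1.0, 8.0),
            "ribs": random.randint(0, 6)}

def satisfies_constraints(c):
    # User-specified requirements, e.g. minimum strength and max weight.
    strength = c["thickness_mm"] * (1 + 0.3 * c["ribs"])  # toy model
    weight = c["thickness_mm"] + 0.5 * c["ribs"]
    return strength >= 4.0 and weight <= 6.0

def score(c):
    # Prefer lighter designs among the feasible ones.
    return -(c["thickness_mm"] + 0.5 * c["ribs"])

candidates = [random_candidate() for _ in range(500)]
feasible = [c for c in candidates if satisfies_constraints(c)]
best = sorted(feasible, key=score, reverse=True)[:3]

for c in best:
    print(c)  # options presented back to the designer
```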

At the time, Jean-Louis Frechin, president of the Paris-based design agency NoDesign, said that the arrival of such tools would lead to a rethinking of the role of designers.

"This is an important evolution and design must seize this technology, because we don't see great human progress without new tools," he told the French financial newspaper Les Echos.

"The only danger is that engineers may imagine that they can model the creation with this always lingering idea of getting rid of designers."

Artificial intelligence is already being used to design many objects. Nasa researchers have used it to design optimised parts for various projects, including the Mars Sample Return space mission, which will bring back to Earth soil samples taken from the Red Planet by the Perseverance rover.

These metal structures have the advantage of being much lighter than those usually designed by humans and of supporting higher structural loads. They are also designed much more quickly, which significantly reduces manufacturing time.

"You can perform the design, analysis and fabrication of a prototype part, and have it in hand in as little as one week," explains Ryan McClelland, a research engineer at the Goddard Space Flight Center, in a statement. "It can be radically fast compared with how were used to working."

But, make no mistake, the designer's expertise remains essential to achieving a satisfactory final result.

"Human intuition knows what looks right, but left to itself, the algorithm can sometimes make structures too thin," said M. McClelland.

He adds: "The algorithms do need a human eye." AFP Relaxnews

See the original post:
Designers' dilemma: Embrace or evade the advancements of ... - The Star Online


Navigating the Dangers of Artificial Intelligence in Modern Society – Northeastern University

LONDON: Does artificial intelligence have the capacity to be creative? Should AI be regulated by nation states? What are the limitations of AI in business?

These were just a few of the questions that a panel of experts explored as part of "When Technology Goes Wrong: Navigating the Perils of AI in Modern Society." Held Tuesday at Devon House at Northeastern University London, the panel prompted attendees to think about the challenges presented by AI technologies.

Northeastern London professors on the panel included Alice Helliwell, Sian Joel-Edgar, Yu-Chun Pan, Tobias Hartung and Xuechen Chen. Each represented a different field that intersected with AI. Moderated by Alexandros Koliousis, an associate professor of computer science at Northeastern London, the panel inspired the over 100 attendees to think about AI from a variety of perspectives: philosophy, mathematics, global politics and more.

"Our panel today is a celebration of local talent," said Koliousis, also a research scientist at Northeastern's Institute of Experiential AI. "The Northeastern University London faculty is at the forefront of interdisciplinary research."

Helliwell is an assistant professor in philosophy who specializes in the philosophy of artificial intelligence, ethics and aesthetics. She started the discussion with the question of whether artificial intelligence can be considered creative.

Like many questions in philosophy, she said, it depends on who you ask.

"This kind of splits in two ways," Helliwell said. "Some people think only humans can be creative, whereas others really think, 'I've seen the outputs of these generative systems; they certainly seem to be creative.'"

In order to answer this question, she said, we need to define creativity, but this is also a question that isn't easy to answer. One theory is that something creative needs to have the impact of surprise, it needs to have novelty, and it needs to have value. Others would say the machine must have agency in order to be creative.

"It kind of depends on which view you think is correct about what creativity is," she said.

Helliwell touched on the issue of AI-assisted creative work and whether that counts as being creative. For some artists, AI is actually limiting, she said.

"Some artists also suggested that using an AI system is taking away some of their own artistic agency," she said. "So they've given it a go and they think that actually limits what they're able to produce."

Many consumers are acutely aware of these limitations, according to Pan, an associate professor of digital transformation. He spoke about the business side of AI, specifically how it's been incorporated into company operations.

Pan started by asking the audience whether they'd ever used a customer service chatbot and if it had given them the results they needed. The audience murmured.

"It does, sometimes. Give it some credit," he said, laughing.

Studies suggest that 60% of the time a chatbot is used, human intervention is still required in order to solve the problem, Pan said.

"We know those applications can be useful, and they aren't useful sometimes," he said, adding that people tend to find workarounds when they don't see the value in the AI that's implemented in businesses.

But just because you can do something, doesn't mean you should.

Chen is an assistant professor of politics and international relations. She spoke about regulation, specifically whether digital spheres should be regulated by governments, or if cyberspace should be regulated by regional entities like the European Union.

One approach to this question is for multiple stakeholders, like European nations and the United States, to apply fundamental rules of democracy and human rights to the areas of cyberspace as well. Other countries, like China or Russia, prefer a sovereignty-based approach and prioritize, for instance, states' rights, and give more emphasis to the role of public authorities in government, she said.

Should the public be involved in creating these AI applications? Joel-Edgar talked it through.

"There's a wider question that humans should be involved, because we decide whether the technology is good or bad," she said. She also talked through the issue of explainability and the expectation that AI technology is understood by the general public.

"If you go to a doctor and he gives you a prescription, you don't ask him, well, can you explain? I don't have to ask him to explain, what's your credentials?" she said. "And so why do we expect that from AI solutions?"

Hartung rounded out the panel. The assistant professor of mathematics discussed how AI learns and makes decisions. By way of example, if AI is shown a picture of a cat, it won't say definitively that it is a cat, he said. Instead, it could give an 85% chance that it is a cat, a 10% chance that it's a dog, and a 5% chance that it's an aircraft.

"Really, what the AI does is it tries to learn a probability distribution, so that if I give you some data," Hartung said, "the AI says: OK, this is the most likely explanation of what this data is."
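
Hartung's cat example maps directly onto the softmax function, which turns a classifier's raw scores into a probability distribution that sums to 1. Here is a minimal sketch; the logits are contrived to roughly reproduce the 85/10/5 split from his example.

```python
# Softmax: exponentiate the raw scores, then normalize so they sum to 1.
import math

labels = ["cat", "dog", "aircraft"]
logits = [4.0, 1.86, 1.17]  # raw network outputs (hypothetical)

exps = [math.exp(x) for x in logits]
probs = [e / sum(exps) for e in exps]

for label, p in zip(labels, probs):
    print(f"{label}: {p:.0%}")  # cat: 85%, dog: 10%, aircraft: 5%
```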

Ultimately, the panelists left the audience with plenty of food for thought when it comes to the creativity of AI.

"AI is an area, nowadays a meta-discipline, where development seems to move faster than society can keep up with," Koliousis said. "New technology impacts society in complex ways, and its impact is not always uniformly positive. So, we need to stop and ask: What will be the impact on people?"

See original here:
Navigating the Dangers of Artificial Intelligence in Modern Society - Northeastern University


Autonomous hoeing thanks to artificial intelligence – Farmers Guide

Lemken and Steketee have introduced an AI-enhanced version of their automatic intra-row hoeing machine to the market.

Hoeing technology becomes even more effective if the machine itself is able to distinguish between crop plants and weeds and works in a targeted manner as a result. This is where artificial intelligence (AI) comes into play. Lemken and Steketee are now introducing the automatic intra-row hoeing machine IC-Weeder in an AI-enhanced version which reliably detects crop plants. This machine is therefore able to create clean fields even in sown crops with extensive weed infestation within rows.

To achieve the goal of autonomous plant recognition, the software integrated into the IC-Weeder AI first needed to learn certain features of crop plants and then combine them to produce complex contexts in a second step. This is possible thanks to an algorithm based on the principle of deep learning. For this particular machine, sugar beet plants were manually marked up at various stages of development. The algorithm then used this data to autonomously create a method for identifying sugar beet plants based on their colour profile, texture, shape, size and leaf position. This allows the hoeing machine to work even in challenging conditions that are too complex for conventional image recognition systems, as the system is able to differentiate clearly between crop plants and weeds.

In the Steketee IC-Weeder, the cameras for the individual rows are well protected and located inside a casing to ensure their reliable function without being affected by environmental light conditions. During a pass with the IC-Weeder AI, the cameras transmit 30 images per second to the on-board computer, producing a plant recognition ratio in excess of 95%. The sickle-shaped, pneumatically controlled knives move into the rows and hoe as close as two centimetres from each plant. A hydraulic parallel sliding frame ensures precise and reliable machine control in the crop.
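
To illustrate the kind of per-frame decision such a system has to make, here is a toy sketch; the detection format, confidence threshold, and the way a 2 cm protection margin is applied are assumptions for illustration, not Steketee's implementation.

```python
# A toy per-frame decision: protect a margin around detected crop plants
# and allow the knives to hoe everywhere else along the row.
from dataclasses import dataclass

@dataclass
class Detection:
    position_cm: float   # position along the row
    crop_prob: float     # model's probability that this is a sugar beet

SAFETY_MARGIN_CM = 2.0
CROP_THRESHOLD = 0.5

def knife_open_intervals(detections, row_start_cm, row_end_cm):
    """Return intervals along the row where the knives may hoe."""
    protected = [(d.position_cm - SAFETY_MARGIN_CM,
                  d.position_cm + SAFETY_MARGIN_CM)
                 for d in detections if d.crop_prob >= CROP_THRESHOLD]
    intervals, cursor = [], row_start_cm
    for lo, hi in sorted(protected):
        if lo > cursor:
            intervals.append((cursor, lo))
        cursor = max(cursor, hi)
    if cursor < row_end_cm:
        intervals.append((cursor, row_end_cm))
    return intervals

# One simulated camera frame: two sugar beets and one likely weed.
frame = [Detection(10.0, 0.97), Detection(18.0, 0.12), Detection(30.0, 0.97)]
print(knife_open_intervals(frame, 0.0, 40.0))
# [(0.0, 8.0), (12.0, 28.0), (32.0, 40.0)]
```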

With the Steketee IC-Weeder AI, Lemken aims to make hoeing even more precise and effortless. At the same time, this smart tillage implement prepares the path towards fully automated driving systems. Lemken UK will have IC-Weeder AI machines available to demonstrate in spring 2023, initially for sugar beet and all other transplanted vegetable and salad crops, with in-row weeding capability in a working width of three metres.

Click on the video below to watch a short clip showing the IC-Weeder in action.

Link:
Autonomous hoeing thanks to artificial intelligence - Farmers Guide
