
Study sheds light on the dark side of artificial intelligence – Troy Media


To understand how to get artificial intelligence right, we need to know how it can go wrong

Artificial intelligence is touted as a panacea for almost every computational problem these days, from medical diagnostics to driverless cars to fraud prevention.

But when AI fails, it does so "quite spectacularly," says Vern Glaser of the Alberta School of Business. In his recent study, "When Algorithms Rule, Values Can Wither," Glaser explains how human values are often subsumed by AI's efficiency imperative, and why the costs can be high.

"If you don't actively try to think through the value implications, it's going to end up creating bad outcomes," he says.

Glaser cites Microsoft's Tay as one example of bad outcomes. When the chatbot was introduced on Twitter in 2016, it was revoked within 24 hours after trolls taught it to spew racist language.

Then there was the robodebt scandal of 2015, when the Australian government used AI to identify overpayments of unemployment and disability benefits. But the algorithm presumed every discrepancy reflected an overpayment and automatically sent notification letters demanding repayment. If someone didn't respond, the case was forwarded to a debt collector.

By 2019, the program identified more than 734,000 overpayments worth two billion Australian dollars (C$1.8 billion).
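The failure mode is simple enough to sketch. The Python below illustrates the income-averaging logic widely reported as the scheme's central flaw; the function, figures and field names are hypothetical, not the government's actual system.

```python
# Illustrative sketch of the income-averaging flaw widely reported in the
# robodebt scheme: annual income from tax records was averaged across
# fortnights and compared against fortnightly benefit declarations.
# All names and figures are hypothetical, not the actual system.

FORTNIGHTS_PER_YEAR = 26

def naive_overpayment_flags(annual_taxed_income: float,
                            declared_fortnightly: list[float]) -> list[float]:
    """Flag a 'debt' wherever averaged income exceeds the declaration.

    Averaging presumes income was earned evenly across the year, so a person
    who worked for six months and then claimed benefits while unemployed is
    wrongly flagged for every fortnight they honestly declared nothing.
    """
    averaged = annual_taxed_income / FORTNIGHTS_PER_YEAR
    return [averaged - d for d in declared_fortnightly if averaged > d]

# A casual worker earned $26,000 in the first half of the year, then
# correctly declared $0 income while unemployed for the second half.
flags = naive_overpayment_flags(26_000, [2_000] * 13 + [0] * 13)
print(len(flags), sum(flags))  # 13 false "overpayments" totalling $13,000.0
```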

"The idea was that by eliminating human judgment, which is shaped by biases and personal values, the automated program would make better, fairer and more rational decisions at much lower cost," says Glaser.

But the human consequences were dire, he says. Parliamentary reviews found "a fundamental lack of procedural fairness" and called the program "incredibly disempowering to those people who had been affected, causing significant emotional trauma, stress and shame," including at least two suicides.

While AI promises to bring enormous benefits to society, "we are now also beginning to see its dark underbelly," says Glaser. In a recent Globe and Mail column, Lawrence Martin points out AI's dystopian possibilities, including autonomous weapons that can fire without human supervision, cyberattacks, deepfakes (a type of artificial intelligence used to create convincing images, audio and video hoaxes) and disinformation campaigns. Former Google CEO Eric Schmidt has warned that AI could easily be used to construct killer biological weapons.

Glaser roots his analysis in French philosopher Jacques Ellul's notion of "technique," offered in his 1954 book The Technological Society, by which the imperatives of efficiency and productivity determine every field of human activity.

"Ellul was very prescient," says Glaser. "His argument is that when you're going through this process of technique, you are inherently stripping away values and creating this mechanistic world where your values essentially get reduced to efficiency."

"It doesn't matter whether it's AI or not. AI, in many ways, is perhaps only the ultimate example of it."

Glaser suggests adherence to three principles to guard against the tyranny of technique in AI. First, recognize that because algorithms are mathematical, they rely on proxies, or digital representations of real phenomena.

One way Facebook gauges friendship, for example, is by how many friends a user has, or by the number of likes received on posts from friends.

"Is that really a measure of friendship? It's a measure of something, but whether it's actually friendship is another matter," says Glaser, adding that the intensity, nature, nuance and complexity of human relationships can easily be overlooked.

"When you're digitizing phenomena, you're essentially representing something as a number. And when you get this kind of operationalization, it's easy to forget it's a stripped-down version of whatever the broader concept is."
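Glaser's point about proxies is easy to see in code. The sketch below defines a hypothetical "friendship score" of the kind he describes; the metric and names are invented for illustration, not Facebook's actual measure.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    likes_from: set[str]

def friendship_score(friends: set[str], posts: list[Post]) -> int:
    """A hypothetical proxy: friend count plus likes received from friends.

    The number is computable at scale, but it collapses the intensity,
    nature and nuance of relationships into a single integer -- exactly
    the stripped-down operationalization Glaser warns about.
    """
    likes_from_friends = sum(len(p.likes_from & friends) for p in posts)
    return len(friends) + likes_from_friends

friends = {"amara", "ben", "chen"}
posts = [Post("me", {"amara", "ben"}), Post("me", {"amara"})]
print(friendship_score(friends, posts))  # 6 -- but six of what, exactly?
```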

For AI designers, Glaser recommends strategically inserting human interventions into algorithmic decision-making and creating evaluative systems that account for multiple values.

"There's a tendency when people implement algorithmic decision-making to do it once and then let it go," he says, but AI that embodies human values requires vigilant and continuous oversight to prevent its ugly potential from emerging.

In other words, AI is simply a reflection of who we are at our best and our worst. Without a good, hard look in the mirror, the latter could take over.

"We want to make sure we understand what's going on so the AI doesn't manage us," he says. "It's important to keep the dark side in mind. If we can do that, it can be a force for social good."

By Geoff McMaster

This article was submitted by the University of Alberta's Folio online magazine, a Troy Media Editorial Content Provider Partner.

The opinions expressed by our columnists and contributors are theirs alone and do not inherently or expressly reflect the views of our publication.

Troy Media is an editorial content provider to media outlets and its own hosted community news outlets across Canada.

Artificial Intelligence, Ethics, Machine learning

View post:
Study sheds light on the dark side of artificial intelligence - Troy Media

Read More..

Qualcomm’s ‘Cloud AI 100’ Beats Nvidia’s Best Artificial Intelligence … – Times of San Diego

Cards with Cloud AI 100 chips in a data center server. Image from Qualcomm video

Artificial intelligence chips from San Diego's Qualcomm beat those from Nvidia in two out of three measures of power efficiency in a new set of test data published on Wednesday.

Nvidia dominates the market for training AI models with huge amounts of data. But after those AI models are trained, they are put to wider use in what is called inference by doing tasks like generating text responses to prompts and deciding whether an image contains a cat.

Analysts believe that the market for data center inference chips will grow quickly as businesses put AI technologies into their products, but companies such as Google are already exploring how to keep the lid on the extra costs that doing so will add.

One of those major costs is electricity, and Qualcomm has used its history designing chips for battery-powered devices such as smartphones to create a chip called the Cloud AI 100 that aims for parsimonious power consumption.

In testing data published on Wednesday by MLCommons, an engineering consortium that maintains testing benchmarks widely used in the AI chip industry, Qualcomm's AI 100 beat Nvidia's flagship H100 chip at classifying images, based on how many data center server queries each chip can carry out per watt.

Qualcomm's chips hit 197.6 server queries per watt versus 108.4 queries per watt for Nvidia. Neuchips, a startup founded by veteran Taiwanese chip academic Youn-Long Lin, took the top spot with 227 queries per watt.

Qualcomm also beat Nvidia at object detection with a score of 3.2 queries per watt versus Nvidias 2.4 queries per watt. Object detection can be used in applications like analyzing footage from retail stores to see where shoppers go most often.

Nvidia, however, took the top spot in both absolute performance terms and power efficiency terms in a test of natural language processing, which is the AI technology most widely used in systems like chatbots. Nvidia hit 10.8 samples per watt, while Neuchips ranked second at 8.9 samples per watt and Qualcomm was in third place at 7.5 samples per watt.
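The metric behind these rankings is simply query throughput divided by average power draw during the benchmark run. A minimal sketch, using the article's published image-classification figures; the raw query count, run time and wattage in the worked example are invented to show the arithmetic, not measured values.

```python
def queries_per_watt(total_queries: int, run_seconds: float,
                     avg_power_watts: float) -> float:
    """Throughput (queries/s) divided by average power draw (W)."""
    return (total_queries / run_seconds) / avg_power_watts

# Hypothetical raw measurements: 118,560,000 queries in 600 s at 1,000 W
# works out to the 197.6 queries/W reported for the Cloud AI 100.
print(queries_per_watt(118_560_000, 600, 1_000))  # 197.6

# The published image-classification figures (queries/W), for comparison:
results = {"Neuchips": 227.0, "Qualcomm Cloud AI 100": 197.6, "Nvidia H100": 108.4}
for chip, qpw in sorted(results.items(), key=lambda kv: -kv[1]):
    print(f"{chip:22s} {qpw:6.1f} queries/W ({qpw / results['Nvidia H100']:.2f}x the H100)")
```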

Read more here:
Qualcomm's 'Cloud AI 100' Beats Nvidia's Best Artificial Intelligence ... - Times of San Diego

Read More..

World Health Day: How Will AI Impact the Future of Healthcare … – The Weather Channel


A few decades from now, healthcare as we know it will see a fundamental shift. In fact, the transformation is already underway, driven by technological integration and innovation in healthcare. At the forefront of this revolution is the buzzword of the era: Artificial Intelligence (AI).

At this moment in the history of civilisation, it is 'virtually' impossible to visualise modern life without artificial intelligence enabling our day-to-day life in some way or the other. From social media and self-driving cars to classrooms and homes, AI is everywhere!

On this World Health Day, let's take a look at what could be the future of 'Health For All,' with digitisation, emerging technologies and AI leading the way.

At the outset, three megatrends are driving AI innovation in healthcare, as highlighted in this year's World Economic Forum report.

Firstly, what we have before us is a 'data deluge' flooding the medical systems. The doubling time for medical knowledge in 1950 was 50 years; in 2020, it was just 73 days! With some technological help, this immense data, from new findings to day-to-day patient information, can be streamlined to suit our needs.

Moreover, when such data is fed into digital systems, we have a vast repository to train machines to aid with diagnosis and treatment, improving accuracy, reducing errors, providing early detection, and also predicting the risk of life-threatening diseases well in advance. For instance, the most common form of pancreatic cancer has a five-year survival rate of less than 10%; but with earlier detection, it's 50%!

Secondly, these technologies are envisioned to affect not only patient care, but also ease the burdens of healthcare professionals when faced with novel problems they haven't witnessed before. A prime example is the COVID-19 pandemic, which pushed global healthcare systems to the brink during its peak.

And thirdly, we are in the midst of a technological renaissance, the floodgates to which have been opened with the launch of Chat GPT, a language model trained on massive volumes of internet text. And early adopters are already on it! Chat GPT has become a high-level technological assistant to medical professionals, aiding with mundane tasks such as medical paperwork, patient certificates and letters.

But it could also aid in more serious medical activities such as triage, that is, moving people, resources and supplies to where they are needed most. It could help with research studies as well, streamlining tasks like the selection and enrollment of participants in clinical trials.

At the same time, we must remember that there are profound ethical implications associated with such advancements. The first and foremost concerns stem from privacy and confidentiality, the foundation of doctor-patient relationships.

An article published in The Conversation states: "If identifiable patient information is fed into Chat GPT, it forms part of the information that the chatbot uses in future. In other words, sensitive information is out there and vulnerable to disclosure to third parties."

Another concern pertains to the efficiency and quality of such databases. Outdated references won't cut it when it comes to sensitive sectors like healthcare. This calls for pairing such databases with robust designs that provide accurate, real-time references.

Finally, we have the issues of equity and governance looming before us. More often than not, the benefits and risks of emerging technologies tend to be unevenly distributed between countries, especially in the absence of strong global guidelines.

It isn't easy to gauge the exact implications of artificial intelligence and emerging technologies from where we stand, but chances are it will get clearer as its use increases in the future. However, addressing the ethical concerns plaguing the sector should be a priority for governments worldwide going forward.



Original post:
World Health Day: How Will AI Impact the Future of Healthcare ... - The Weather Channel

Read More..

In A.I. Race, Microsoft and Google Choose Speed Over Caution – The New York Times

In March, two Google employees, whose jobs are to review the company's artificial intelligence products, tried to stop Google from launching an A.I. chatbot. They believed it generated inaccurate and dangerous statements.

Ten months earlier, similar concerns were raised at Microsoft by ethicists and other employees. They wrote in several documents that the A.I. technology behind a planned chatbot could flood Facebook groups with disinformation, degrade critical thinking and erode the factual foundation of modern society.

The companies released their chatbots anyway. Microsoft was first, with a splashy event in February to reveal an A.I. chatbot woven into its Bing search engine. Google followed about six weeks later with its own chatbot, Bard.

The aggressive moves by the normally risk-averse companies were driven by a race to control what could be the tech industry's next big thing: generative A.I., the powerful new technology that fuels those chatbots.

That competition took on a frantic tone in November when OpenAI, a San Francisco start-up working with Microsoft, released ChatGPT, a chatbot that has captured the public imagination and now has an estimated 100 million monthly users.

The surprising success of ChatGPT has led to a willingness at Microsoft and Google to take greater risks with their ethical guidelines set up over the years to ensure their technology does not cause societal problems, according to 15 current and former employees and internal documents from the companies.

The urgency to build with the new A.I. was crystallized in an internal email sent last month by Sam Schillace, a technology executive at Microsoft. He wrote in the email, which was viewed by The New York Times, that it was an "absolutely fatal error in this moment to worry about things that can be fixed later."

"When the tech industry is suddenly shifting toward a new kind of technology, the first company to introduce a product is the long-term winner just because they got started first," he wrote. "Sometimes the difference is measured in weeks."

Last week, tension between the industry's worriers and risk-takers played out publicly as more than 1,000 researchers and industry leaders, including Elon Musk and Apple's co-founder Steve Wozniak, called for a six-month pause in the development of powerful A.I. technology. In a public letter, they said it presented "profound risks to society and humanity."

Regulators are already threatening to intervene. The European Union proposed legislation to regulate A.I., and Italy temporarily banned ChatGPT last week. In the United States, President Biden on Tuesday became the latest official to question the safety of A.I.

A brave new world. A new crop of chatbots powered by artificial intelligence has ignited a scramble to determine whether the technology could upend the economics of the internet, turning today's powerhouses into has-beens and creating the industry's next giants. Here are the bots to know:

ChatGPT. ChatGPT, the artificial intelligence language model from a research lab, OpenAI, has been making headlines since November for its ability to respond to complex questions, write poetry, generate code, plan vacations and translate languages. GPT-4, the latest version introduced in mid-March, can even respond to images (and ace the Uniform Bar Exam).

Bing. Two months after ChatGPT's debut, Microsoft, OpenAI's primary investor and partner, added a similar chatbot, capable of having open-ended text conversations on virtually any topic, to its Bing internet search engine. But it was the bot's occasionally inaccurate, misleading and weird responses that drew much of the attention after its release.

Ernie. The search giant Baidu unveiled China's first major rival to ChatGPT in March. The debut of Ernie, short for Enhanced Representation through Knowledge Integration, turned out to be a flop after a promised live demonstration of the bot was revealed to have been recorded.

"Tech companies have a responsibility to make sure their products are safe before making them public," Mr. Biden said at the White House. When asked if A.I. was dangerous, he said: "It remains to be seen. Could be."

The issues being raised now were once the kinds of concerns that prompted some companies to sit on new technology. They had learned that prematurely releasing A.I. could be embarrassing. Seven years ago, for example, Microsoft quickly pulled a chatbot called Tay after users nudged it to generate racist responses.

Researchers say Microsoft and Google are taking risks by releasing technology that even its developers don't entirely understand. But the companies said that they had limited the scope of the initial release of their new chatbots, and that they had built sophisticated filtering systems to weed out hate speech and content that could cause obvious harm.

Natasha Crampton, Microsoft's chief responsible A.I. officer, said in an interview that six years of work around A.I. and ethics at Microsoft had allowed the company to "move nimbly and thoughtfully." She added that "our commitment to responsible A.I. remains steadfast."

Google released Bard after years of internal dissent over whether generative A.I.'s benefits outweighed the risks. It announced Meena, a similar chatbot, in 2020. But that system was deemed too risky to release, three people with knowledge of the process said. Those concerns were reported earlier by The Wall Street Journal.

Later in 2020, Google blocked its top ethical A.I. researchers, Timnit Gebru and Margaret Mitchell, from publishing a paper warning that so-called large language models used in the new A.I. systems, which are trained to recognize patterns from vast amounts of data, could spew abusive or discriminatory language. The researchers were pushed out after Ms. Gebru criticized the company's diversity efforts and Ms. Mitchell was accused of violating its code of conduct after she saved some work emails to a personal Google Drive account.

Ms. Mitchell said she had tried to help Google release products responsibly and avoid regulation, but instead "they really shot themselves in the foot."

Brian Gabriel, a Google spokesman, said in a statement that "we continue to make responsible A.I. a top priority, using our A.I. principles and internal governance structures to responsibly share A.I. advances with our users."

Concerns over larger models persisted. In January 2022, Google refused to allow another researcher, El Mahdi El Mhamdi, to publish a critical paper.

Mr. El Mhamdi, a part-time employee and university professor, used mathematical theorems to warn that the biggest A.I. models are more vulnerable to cybersecurity attacks and present unusual privacy risks because they've probably had access to private data stored in various locations around the internet.

Though an executive presentation later warned of similar A.I. privacy violations, Google reviewers asked Mr. El Mhamdi for substantial changes. He refused and released the paper through École Polytechnique.

He resigned from Google this year, citing in part research censorship. He said modern A.I.'s risks "highly exceeded" the benefits. "It's premature deployment," he added.

After ChatGPT's release, Kent Walker, Google's top lawyer, met with research and safety executives on the company's powerful Advanced Technology Review Council. He told them that Sundar Pichai, Google's chief executive, was pushing hard to release Google's A.I.

Jen Gennai, the director of Google's Responsible Innovation group, attended that meeting. She recalled what Mr. Walker had said to her own staff.

"The meeting was Kent talking at the A.T.R.C. execs, telling them, 'This is the company priority,'" Ms. Gennai said in a recording that was reviewed by The Times. "'What are your concerns? Let's get in line.'"

Mr. Walker told attendees to fast-track A.I. projects, though some executives said they would maintain safety standards, Ms. Gennai said.

Her team had already documented concerns with chatbots: They could produce false information, hurt users who become emotionally attached to them and enable tech-facilitated violence through mass harassment online.

In March, two reviewers from Ms. Gennai's team submitted their risk evaluation of Bard. They recommended blocking its imminent release, two people familiar with the process said. Despite safeguards, they believed the chatbot was not ready.

Ms. Gennai changed that document. She took out the recommendation and downplayed the severity of Bards risks, the people said.

Ms. Gennai said in an email to The Times that because Bard was an experiment, reviewers were not supposed to weigh in on whether to proceed. She said she "corrected inaccurate assumptions, and actually added more risks and harms that needed consideration."

Google said it had released Bard as a limited experiment because of those debates, and Ms. Gennai said continuing training, guardrails and disclaimers made the chatbot safer.

Google released Bard to some users on March 21. The company said it would soon integrate generative A.I. into its search engine.

Satya Nadella, Microsoft's chief executive, made a bet on generative A.I. in 2019 when Microsoft invested $1 billion in OpenAI. After deciding the technology was ready over the summer, Mr. Nadella pushed every Microsoft product team to adopt A.I.

Microsoft had policies developed by its Office of Responsible A.I., a team run by Ms. Crampton, but the guidelines were not consistently enforced or followed, said five current and former employees.

Despite having a transparency principle, ethics experts working on the chatbot were not given answers about what data OpenAI used to develop its systems, according to three people involved in the work. Some argued that integrating chatbots into a search engine was a particularly bad idea, given how it sometimes served up untrue details, a person with direct knowledge of the conversations said.

Ms. Crampton said experts across Microsoft worked on Bing, and key people had access to the training data. The company worked to make the chatbot more accurate by linking it to Bing search results, she added.

In the fall, Microsoft started breaking up what had been one of its largest technology ethics teams. The group, Ethics and Society, trained and consulted company product leaders to design and build responsibly. In October, most of its members were spun off to other groups, according to four people familiar with the team.

The remaining few joined daily meetings with the Bing team, racing to launch the chatbot. John Montgomery, an A.I. executive, told them in a December email that their work remained vital and that "more teams will also need our help."

After the A.I.-powered Bing was introduced, the ethics team documented lingering concerns. Users could become too dependent on the tool. Inaccurate answers could mislead users. People could believe the chatbot, which uses an "I" and emojis, was human.

In mid-March, the team was laid off, an action that was first reported by the tech newsletter Platformer. But Ms. Crampton said hundreds of employees were still working on ethics efforts.

Microsoft has released new products every week, a frantic pace to fulfill plans that Mr. Nadella set in motion in the summer when he previewed OpenAI's newest model.

He asked the chatbot to translate the Persian poet Rumi into Urdu, and then English. "It worked like a charm," he said in a February interview. "Then I said, 'God, this thing.'"

Mike Isaac contributed reporting. Susan C. Beachy contributed research.

More:
In A.I. Race, Microsoft and Google Choose Speed Over Caution - The New York Times

Read More..

How Artificial Intelligence is Shaking Up the Music Industry – The National Law Review


See the original post here:
How Artificial Intelligence is Shaking Up the Music Industry - The National Law Review

Read More..

Hankook, Amazon Web Services and Snowflake partner to establish … – Tire Technology International

To speed up the development of an integrated artificial intelligence (AI) platform, Hankook and Hankook Tire & Technology have formed a collaboration with Amazon Web Services (AWS) and Snowflake, a cloud-based SaaS platform that efficiently stores, processes and analyzes large volumes of data.

Through the collaboration, the group aims to establish an optimized data analysis infrastructure which incorporates cloud-based solutions and platform expertise. The project seeks to create an integrated data analysis environment utilizing the very latest AI and machine learning (ML) technologies. During the project, AWS's data lake environment and analysis infrastructure will be used, including SageMaker and AutoML. Additionally, Snowflake will act as a data warehouse for the promotion of cloud migration and digital transformation.
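As a rough illustration of the kind of architecture described, the sketch below pulls tabular data from Snowflake and stages it in an S3 data lake for a SageMaker training or AutoML job. The account, credentials, table and bucket names are placeholders, not Hankook's actual configuration.

```python
# Sketch: Snowflake as the warehouse, an S3 data lake feeding SageMaker.
import boto3
import snowflake.connector

conn = snowflake.connector.connect(
    account="EXAMPLE-ACCOUNT", user="analyst", password="***",
    warehouse="ANALYTICS_WH", database="TIRE_DATA", schema="QUALITY",
)
cur = conn.cursor()
cur.execute("SELECT batch_id, sensor_readings, defect_label FROM production_runs")
df = cur.fetch_pandas_all()  # needs the connector's pandas extra installed
conn.close()

# Stage the extract in the data lake, where a SageMaker estimator or
# AutoML job can consume s3://example-datalake/training/ as an input channel.
df.to_parquet("production_runs.parquet")
boto3.client("s3").upload_file(
    "production_runs.parquet", "example-datalake",
    "training/production_runs.parquet",
)
```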

At present, Hankook is conducting a project to build an integrated data analysis platform in a cloud environment. The newly developed platform will provide an environment for collecting and integrating internal data, including research and development, production and quality data from Hankook Tire. It will also be capable of collecting external data such as mobility data and Voice of Customer data. The platform will enable integrated analysis of this data.

The newly built platform will be used by the tire manufacturer as a tool to enhance tire performance and quality. It will also be used to integrate and analyze data from performance tests and customer feedback on Hankook's iON series of electric vehicle tires to enhance performance. By analyzing production data, Hankook aims to improve the efficiency of new product development while addressing quality issues.

"With the development of generative AI, AI is passing through an inflection point: there is a need for a platform that can collect and freely use internal and external data to apply it to businesses," said Seong-jin Kim, chief digital officer and chief information officer, Hankook. "By utilizing the excellent technology and solutions of Amazon Web Services and Snowflake, we will maximize our AI capabilities and accelerate our journey to enhance product competitiveness in areas such as electric vehicle and smart tires."

See the original post:
Hankook, Amazon Web Services and Snowflake partner to establish ... - Tire Technology International

Read More..

At CAGR 47.90% | Artificial Intelligence in Manufacturing Market To Develop Strongly And Cross USD 154.6 Bn – EIN News

The Artificial Intelligence in Manufacturing market reached USD 3.4 billion in 2023 and is projected to reach USD 154.6 billion by 2032, exhibiting a growth rate (CAGR) of 47.90% during 2023-2032.

The industry's behavior is discussed in detail, and the report outlines the future direction that will ensure strong profits over the coming years. It provides a practical overview of the global market and its changing environment to help readers make informed decisions about market projects, and focuses on the opportunities that will allow the market to expand its operations in existing markets.

Request a sample of the report: https://market.us/report/artificial-intelligence-in-manufacturing-market/request-sample

(Use Company eMail ID to Get Higher Priority)

This report helps readers analyze the market in depth and helps the leading players decide on their business strategy and set goals. It provides critical market information, including Artificial Intelligence in Manufacturing market size, growth rates and forecasts in key regions and countries, as well as growth opportunities in niche markets.

The Artificial Intelligence in Manufacturing report contains data gathered using proven research methods. It provides all-around information that aids in estimating every part of the Artificial Intelligence in Manufacturing market. The report was created by considering several aspects of market research and analysis, including market size estimates, market dynamics, company and market best practices, entry-level marketing strategies, positioning, segmentation, competitive landscaping, economic forecasting, industry-specific technology solutions, roadmap analysis, targeting of key buying criteria, and in-depth benchmarking of vendor offerings.

Key market players covered in the report: Siemens, Intel Corporation, NVIDIA Corporation, Alphabet, IBM Corporation, Microsoft Corporation, General Electric Company, DataRPM, Sight Machine, General Vision, AIBrain, Rockwell Automation, Cisco Systems, Mitsubishi Electric Corporation, Oracle Corporation, SAP SE, Preferred Networks, Vicarious, Skymind, Citrine Informatics, CloudMinds Technologies

Artificial Intelligence in Manufacturing Market, Based on Type:

Deep Learning, Computer Vision, Context Awareness, Natural Language Processing

Artificial Intelligence in Manufacturing Market, By Application:

Semiconductor and Electronics, Energy and Power, Pharmaceuticals, Automobile, Heavy Metals and Machine Manufacturing, Food and Beverages

The report covers the following key regions:

- North America (the U.S and Canada and the rest of North America)

- Europe (Germany, France, Italy and Rest of Europe)

- Asia-Pacific (China, Japan, India, South Korea and Rest of Asia-Pacific)

- LAMEA (Brazil, Turkey, Saudi Arabia, South Africa and Rest of LAMEA)

Interested in Procure The Data? Inquire here at: https://market.us/report/artificial-intelligence-in-manufacturing-market/#inquiry

Report scope and highlights:

1. Industry trends (2015-2020 historic and future 2022-2031)

2. Key regulations

3. Technology roadmap

4. Intellectual property analysis

5. Value chain analysis

6. Porter's Five Forces Model, PESTLE and SWOT analysis

Key questions answered in this report:

How is the Artificial Intelligence in Manufacturing market growing across regions such as North America, Europe and Asia-Pacific?

What factors are responsible for driving market growth?

What are the key segments of the market? What growth prospects are there for the market's applications?

At what stage are the key products in the Artificial Intelligence in Manufacturing market?

What challenges must the market overcome across global regions (North America, Europe, Asia-Pacific and South America) to be commercially viable? Are growth and commercialization dependent on cost declines or on technological/application breakthroughs?

What are the prospects for the Artificial Intelligence in Manufacturing Market?

How do the performance characteristics of Artificial Intelligence in Manufacturing offerings differ from those of established entities?

1. The report provides an in-depth analysis of the Artificial Intelligence in Manufacturing market.

2. Its findings are intended to help businesses make informed decisions.

3. It presents forecasts for the Artificial Intelligence in Manufacturing Market.

4. It allows you to understand the key product segments.

5. The Market.us team sheds light on market dynamics such as drivers, restraints and opportunities.

6. It provides a regional analysis of the Artificial Intelligence in Manufacturing Market as well as business profiles for several stakeholders.

7. It provides massive data on trending factors that can influence the development of the Artificial Intelligence in Manufacturing Market.

Read the full Artificial Intelligence in Manufacturing market report: https://market.us/report/artificial-intelligence-in-manufacturing-market/


About Market.us

Market.US provides customization to suit any specific or unique requirement and tailor-makes reports as per request. We go beyond boundaries to take analytics, analysis, study, and outlook to newer heights and broader horizons. We offer tactical and strategic support, which enables our esteemed clients to make well-informed business decisions and chart out future plans and attain success every single time. Besides analysis and scenarios, we provide insights into global, regional, and country-level information and data, to ensure nothing remains hidden in any target market. Our team of tried and tested individuals continues to break barriers in the field of market research as we forge forward with a new and ever-expanding focus on emerging markets.

Contact:

Global Business Development Teams - Market.us

Market.us (Powered By Prudour Pvt. Ltd.)

Send Email: inquiry@market.us

Address: 420 Lexington Avenue, Suite 300, New York City, NY 10170, United States

Tel: +1 718 618 4351

Website: https://market.us



Go here to see the original:
At CAGR 47.90% | Artificial Intelligence in Manufacturing Market To Develop Strongly And Cross USD 154.6 Bn - EIN News

Read More..

Facebook parent Meta touts Artificial Intelligence robot that can learn from humans – Yahoo Finance

Meta announced two advancements towards developing AI robots that can perform "challenging sensorimotor skills."

In a press release on Friday, the company announced that it has developed a way for robots to learn from interactions from real-world humans "by training a general-purpose visual representation model (an artificial visual cortex) from a large number of egocentric videos."

The videos come from an open source dataset from Meta, which the company says shows people doing everyday tasks such as "going to the grocery store and cooking lunch."

One way that Meta's Facebook AI Research (FAIR) team is working to train the robots is by developing an artificial visual cortex, which, in humans, is the region of the brain that enables individuals to convert vision into movement.
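Conceptually, the "artificial visual cortex" approach amounts to reusing a pretrained visual encoder as a frozen feature extractor beneath a trainable control policy. The sketch below shows the pattern in generic PyTorch, with a torchvision ResNet standing in for Meta's actual pretrained model; it is an illustration of the idea, not Meta's code.

```python
# Generic illustration: a frozen pretrained encoder ("cortex") plus a small
# trainable policy head that maps visual features to motor commands.
import torch
import torch.nn as nn
from torchvision.models import resnet18, ResNet18_Weights

cortex = resnet18(weights=ResNet18_Weights.DEFAULT)
cortex.fc = nn.Identity()           # expose the 512-d visual features
for p in cortex.parameters():
    p.requires_grad = False         # the cortex is reused, not retrained

policy_head = nn.Sequential(        # e.g. 7 joint-velocity outputs
    nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 7),
)

frames = torch.randn(8, 3, 224, 224)    # a batch of camera frames
with torch.no_grad():
    features = cortex(frames)           # (8, 512) visual representation
actions = policy_head(features)         # (8, 7) commands to train against
```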



The dataset that is used to teach the robots, Ego4D, contains "thousands of hours of wearable camera video" from people participating in the research that perform daily activities such as cooking, sports, cleaning, and crafts.


According to the press release, the FAIR team created "CortexBench," which consists of "17 different sensorimotor tasks in simulation, spanning locomotion, navigation, and dexterous and mobile manipulation."

"The visual environments span from flat infinite planes to tabletop settings to photorealistic 3D scans of real-world indoor spaces," the company says.



In announcing the second development, Meta's FAIR team says that it has used adaptive (sensorimotor) skill coordination (ASC) on a Boston Dynamics' Spot robot to "rearrange a variety of objects" in a "185-square-meter apartment and a 65-square-meter university lab."

When used on the Spot robot, Meta says that ASC achieved "near perfect performance" and succeeded on 59 of 60 episodes, being able to overcome "hardware instabilities, picking failures, and adversarial disturbances like moving obstacles or blocked paths."


Video shared by Meta shows the robot moving various objects from one location to another.



The FAIR team says it was able to achieve this by teaching the Spot robot to "move around an unseen house, pick up out-of-place objects, and put them in the right location."

When tested, the Spot robot used "its learned notion of what houses look like" to complete the task of rearranging objects.

More here:
Facebook parent Meta touts Artificial Intelligence robot that can learn from humans - Yahoo Finance

Read More..

Crypto Decentralization Identified as U.S. Security Risk by Treasury Department – U.Today

Alex Dovbnya

The United States Treasury Department has issued a report warning of the growing security risks associated with the decentralized finance (DeFi) sector, highlighting vulnerabilities that illicit actors exploit for money laundering and evasion of regulations.


The United States Treasury Department has recently published a report emphasizing the potential security threats arising from the expanding decentralized finance (DeFi) industry.

The report points out the vulnerabilities in DeFi services that are being exploited by malicious actors, including ransomware hackers and North Korean cyber operatives, to launder illegal funds.

Moreover, the report highlights the difficulties in supervising and enforcing obligations related to anti-money laundering and countering terrorist financing due to the often obscure nature of DeFi services' organizational structures.

The assessment reveals that numerous DeFi services do not comply with existing rules. Consequently, the report suggests intensifying supervision and enforcement efforts as well as engaging more with the industry to clarify how current regulations apply to DeFi services.

The report also mentions that if a DeFi service is not classified as a financial institution under the Bank Secrecy Act (BSA), it may create a potential vulnerability as there is a lower chance of implementing anti-money laundering and terrorist financing measures.

Lastly, the assessment urges increased cooperation with international partners to promote the adoption of global anti-money laundering and counter-terrorist financing standards, along with better cybersecurity practices among digital asset firms.

View post:

Crypto Decentralization Identified as U.S. Security Risk by Treasury Department - U.Today

Read More..

How DAOs Factor Into Litigation, Enforcement, and Restructuring – Bloomberg Law

Following the collapse of FTX and the landmark Commodity Futures Trading Commission case concerning Ooki DAO, corporate law questions around decentralized autonomous organizations are crystalizing through court cases and regulatory enforcement actions. Meanwhile, prices for cryptoassets have begun to rebound.

If and when the crypto winter fades into spring, it could bring along renewed interest in participating in DAOs. For users and investors to safely do so, much still depends on closely watched current court cases that test the parameters of open legal questions around DAOs.

DAOs are an emerging type of corporate structure designed to fit the decentralized culture behind blockchain technology. Unlike traditional corporate structures such as limited liability corporations, which centralize decision-making, the DAO structure exists to decentralize control, vesting decision-making power in the tokenholders themselves.
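A toy model helps show what "vesting decision-making power in the tokenholders" means mechanically: proposals pass by token-weighted vote, with no officer or board in the loop. The Python sketch below is purely illustrative; real DAOs implement this logic in on-chain smart contracts.

```python
# Purely illustrative toy model of token-weighted governance.
from collections import defaultdict

class ToyDAO:
    def __init__(self, balances: dict[str, int]):
        self.balances = balances          # tokenholder -> governance tokens

    def vote(self, ballots: dict[str, bool]) -> bool:
        """Each ballot is weighted by the holder's tokens; a simple
        majority of participating tokens carries the proposal."""
        tally = defaultdict(int)
        for holder, approve in ballots.items():
            tally[approve] += self.balances.get(holder, 0)
        return tally[True] > tally[False]

dao = ToyDAO({"alice": 600, "bob": 300, "carol": 100})
print(dao.vote({"alice": True, "bob": False, "carol": False}))  # True
# alice's 600 tokens outvote the 400 against; no officer signs off.
```

Under the general-partnership theory discussed below, it is exactly this kind of active participation in governance votes that may expose a tokenholder to personal liability.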

This formation challenges the definition of what legal personhood is intended to do, prompting potential investors, the CFTC, and the Securities and Exchange Commission to ask questions such as: who bears the burden of accountability, who owes a fiduciary duty to investors, how does a DAO take action and who can effectuate that action on its behalf, who are the equity holders, and how do you know with whom you have co-invested in a DAO?

Several recent and ongoing cases working their way through the courts have begun to shed light on possible answers to these questions. Below are a select few that investors, regulators and industry lawyers are watching closely:

The CFTC's enforcement action found that the DAO is a general partnership or unincorporated association, as a matter of state law, and that a DAO was a legal person subject to suit. It also found that tokenholders who actively participate in governance are general partners and thus liable for the activities of the DAO, and therefore they can be liable for the debts and regulatory violations of the DAO.

The DAO founders thought they couldn't be sued because they formed a DAO. This enforcement action shows that the reach of the government extends to activities committed in the blockchain space, whether through DAOs, on decentralized exchanges, or otherwise.

Multiple coordinated enforcement actions filed over the past month by the SEC, Department of Justice, and CFTC accuse trader Avraham Eisenberg of violating federal law by fraudulently manipulating the price of Mango DAO's MNGO token to unlawfully obtain over $110 million in digital assets.

Eisenberg publicly defended his conduct on Twitter after he stole from the Mango protocol, saying that because the flawed computer code of the protocol let him do it, he was allowed to do it.

Through its enforcement actions, the government has countered with a different interpretation: Code is code, but law is law. Simply taking actions within the parameters of a code will not shield token holders from legal liability if such actions otherwise violate the law.

This first-of-its-kind private class action alleges that bZx DAO, its co-founders, and members failed to adequately secure their funds, resulting in the theft of $55 million. This suit is seeking class action certification for damages caused by these alleged actions.

A March 27 ruling denied the motion to dismiss filed by members of the DAO who held governance tokens, finding that: the DAO is plausibly alleged to be a general partnership, the US founders attempted to avoid US law by changing the corporate form from an LLC to a DAO, and the members holding governance tokens may have a duty of care to the other token holders, and corresponding liability for failure to secure the DAO's sensitive information.

The biggest concern here for DAOs is that even losses which are too small for any one investor to economically sue over can be aggregated together into a class action suit, leaving the DAO staring down the barrel of private recoupment of losses.

This private class action filed by a software engineer alleges that PoolTogether, a blockchain-based app that encourages users to save their cryptocurrencies by offering awards, is essentially a lottery and therefore prohibited under New York Law. The plaintiffs attorneys are increasingly trying to apply broader laws and tactics to go after DAOs and their token holders.

As DAOs increase in total assets, they will become bigger and easier targets for these types of claims. It is more critical now for DAOs to retain good legal counsel and work within the parameters of their advice to mitigate their potential losses.

So where does all this leave current or potential DAOs? DAOs are not immune from federal law or lawsuits. Therefore investors and participants in DAOs must act with an appropriate level of skepticism and caution, and in coordination with good legal counsel.

We also have more clarity on the question of liability. The Ooki DAO case demonstrates that courts appear to be leaning toward finding that DAOs are unincorporated associations with general partners. If that interpretation holds, tokenholders who actively participate in governance may be personally liable under general partnership liability doctrines.

But the dissenting opinion in the Ooki case lays out several alternate theories of liability that the CFTC or SEC could rely on (such as control person liability or aiding and abetting, which go beyond this current update) that one could argue better apply to tokenholders. We will have to wait for more decisions to learn whether the general partnership theory will be applied instead of these potential theories of liability.

And simply because the government attempts one theory of liability does not mean that private plaintiffs can't attempt to recover losses under alternative theories.

Another conclusion we might draw from these rulings is that the risk level for tokenholders may depend on their level of participation. If there is a direct vote on an action that results in liability (for example, tokenholders approving a self-dealing transaction), it seems more likely that a court would find a higher level of culpability.

On the other hand, when management takes action on its own, tokenholders who were not part of that action could be at a lower risk. But the specific level of participation where increased risk of personal liability kicks in for tokenholders has not yet been settled.

Finally, the moderate clarity we now have on liability can guide a decentralized finance entity in deciding how to structure. A DAO may make a philosophical compromise on absolute decentralization by forming within an LLC wrapper and thereby trading some centralization of organizational decision-making authority in exchange for greater liability protections/limitations.

In addition to limiting personal liability, the wrapper may address issues around how DAOs can take actions, enabling DAOs to sign contracts, hire employees, pay taxes, and take other practical steps. Finally, an LLC wrapper may also provide some level of protection against future regulation in the decentralized finance space, which is very much a moving target.

For now, these structures will operate in the shadow of significant legal uncertainty. Approaching DAOs with caution and limiting personal liability through more traditional structures, and active close engagement with legal counsel, are the most prudent steps until courts provide additional guidance.

This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law and Bloomberg Tax, or its owners.

Andrew Gilbert, partner at Croke Fairchild Duarte & Beres, focuses his practice on private M&A transactions, venture capital, and business counseling.

Michael Frisch, partner at Croke Fairchild Duarte & Beres, counsels clients on regulatory compliance, investigations, and enforcement matters involving digital assets.

Rob Isham, partner at Croke Fairchild Duarte & Beres, represents major financial institutions and borrowers in a range of credit facilities.


Here is the original post:

How DAOs Factor Into Litigation, Enforcement, and Restructuring - Bloomberg Law

Read More..