
Wimbledon to use artificial intelligence-powered commentary for upcoming tournament… – talkSPORT

Wimbledon is set to introduce artificial intelligence-powered commentary for coverage of this year's tournament.

The competition, which takes place between July 3 and July 16, will use cutting-edge technology to broadcast the biggest matches from SW19.


The All England Club has teamed up with technology giants IBM to offer AI-generated audio commentary and captions in its online highlights videos.

It will use IBM's watsonx AI platform, which has been trained in the unique language of tennis with the help of the All England Club. The service will also include AI-powered analysis of the singles draws, examining how favourable a player's path to the final might be.

The service will be available on the official Wimbledon app and website and will be separate from the BBC's coverage of the event, with the television broadcaster using human commentators.

Kevin Farrar, Head of IBM's Sports Partnerships, said: "We are using a generated voice. It's not a real voice. It's not based on a specific person."

"The commentary is being generated from the stats: forehand, backhand, etc."

"We had tennis specialists on the team, so we drew on them in terms of the language it will use. It's not based on an individual and their style."

"You can see in the future that you could train it in different styles, languages, voices. So this is a step on that journey."

"I see AI as very much complementing the human element."


"You can't replace John McEnroe doing commentary; that human element always needs to be there."

"It's very much about supplementing and complementing."

"For Wimbledon, it's about providing commentary in the future on matches that don't currently have human commentary: the seniors, juniors and wheelchair matches."

"So in all instances it's about complementing the human element rather than replacing it."
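To make the idea of stat-driven commentary concrete, here is a minimal Python sketch of how a line of commentary might be assembled from point-by-point statistics. The field names and template are illustrative assumptions only, not IBM's actual watsonx pipeline, which pairs a language model trained on tennis terminology with a synthetic voice.

```python
# Hypothetical point-by-point statistics; every field name here is invented.
point = {"winner": "Alcaraz", "shot": "forehand", "rally_length": 12, "score": "30-15"}

def commentate(p: dict) -> str:
    """Assemble one line of commentary from match statistics."""
    return (f"{p['winner']} takes the point with a {p['shot']} winner "
            f"after a {p['rally_length']}-shot rally. {p['score']}.")

print(commentate(point))
# The resulting text would then be handed to a text-to-speech voice.
```

A real system would choose among many templates, or generate text with a trained model, to avoid the monotony a single fixed template produces.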


The five most advanced robots with Artificial Intelligence – Telefónica

Continuing innovation is driving the development of a new generation of robots whose skills are constantly increasing. Although robots don't yet have the same capabilities and skills as human beings, there are now several artificially intelligent robots that exemplify the breakthroughs in this area.

Smart robots have AI algorithms integrated into them, enabling them to work on their own after an initial phase of training or automatic learning. This technology is known as machine learning: thanks to it, the robot learns from examples, reacts to new situations and is able to reason on its own.
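As a rough illustration of that train-then-act loop, here is a minimal Python sketch in which a deliberately simple nearest-neighbour rule stands in for a real learning algorithm; the sensor readings, labels and behaviour are all invented for the example.

```python
# Labelled training examples: (normalised sensor reading, correct action).
training_data = [(0.2, "safe"), (0.4, "safe"), (0.8, "obstacle"), (0.9, "obstacle")]

def classify(sensor_reading: float) -> str:
    """React to a new reading using the nearest labelled example."""
    nearest = min(training_data, key=lambda pair: abs(pair[0] - sensor_reading))
    return nearest[1]

print(classify(0.75))  # -> "obstacle"
```

The point is only that the mapping from reading to action is learned from examples rather than written out as explicit rules.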

Although, generally speaking, robots don't have artificial intelligence, the robotics industry is gradually incorporating certain AI processes into them to increase their autonomy.

In 2016, engineer David Hanson developed Sophia, the first robot with a brain made up of AI algorithms. Thanks to this technology, the robot can hold conversations with people and generate facial expressions.

Sophia's brain is segmented into three subsystems. First, it has a platform for searching for data and a system capable of processing it. It also has a speech system for announcing the processed data. Finally, it incorporates a tool that analyses the environment, so it can adapt to the place where it is working and issue a response appropriate to the context.
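A minimal Python sketch of that three-part division of labour follows; the component names and behaviour are assumptions made for illustration, not Hanson Robotics' actual design.

```python
class ConversationPipeline:
    """Toy three-part pipeline: data retrieval, context analysis, speech."""

    def retrieve(self, query: str) -> str:
        # Stand-in for the data search and processing platform.
        return f"some facts about {query}"

    def analyse_context(self, place: str) -> str:
        # Stand-in for the environment-analysis tool.
        return "formal" if place == "conference" else "casual"

    def speak(self, facts: str, tone: str) -> str:
        # Stand-in for the speech system that announces the processed data.
        return f"({tone} tone) Here is what I know: {facts}"

pipe = ConversationPipeline()
print(pipe.speak(pipe.retrieve("medicine"), pipe.analyse_context("conference")))
```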

Sophia was created with the aim of helping specific sectors such as science, medicine and education. However, its current role is more recreational, as it is used for talks and entertainment.

Sophia paved the way for numerous AI-powered robotic creations, including:

AMECA

Developed by the robotics company Engineered Arts, this intelligent robot can move, converse and even express a wide range of emotions, from happiness to anger. One of its most distinctive features is its ability to make very realistic and emotionally charged facial gestures.

Its software, called Tritium, is made up of artificial intelligence algorithms capable of identifying behavioural patterns and making predictions. Although it is designed with artificial limbs connected to the latest technology, its lower half is currently non-functional, so it cannot move autonomously.

Spot

Spot is a dog-like intelligent robot designed by the US company Boston Dynamics. After years of research, the company put it on sale in the USA in June 2020 for $74,500.

Spot's brain is made up of AI algorithms, and it is capable of working autonomously, detecting and even predicting problems and resolving highly complex questions. It is programmed to communicate seamlessly using natural language. Thanks to its various technological integrations, it can analyse large amounts of information, providing additional protection against accidents in logistics settings.

Atlas

Atlas, also created by Boston Dynamics, is regarded as the world's most agile and coordinated intelligent robot. Its software gives it full control over its legs and arms, so it can run and jump over obstacles with exceptional agility.

It has four electronic limbs made of aluminium and titanium with which it can pick up and throw heavy objects. It can also dance, navigate complex spaces, grasp things and keep its balance while carrying objects.

AInstein

AInstein is an educational project created by secondary school students in Cyprus with the aid of ChatGPT's artificial intelligence. Its main aim is to improve the educational experience in classrooms: it can interact with students and teachers, tell jokes, attempt different languages and offer advice on how to approach teaching.

The students who worked on the development of AInstein decided to give it a very particular appearance, as it resembles the famous Michelin Man and its screen simulates a human face. These students have learnt to interact with technology in a collaborative manner and demonstrated that the arrival of intelligent robots in the classroom can bring important educational benefits.

PaLM-E

Google has developed this tool with AI built on a multimodal visual-language model. It is chiefly characterised by its 562 billion parameters, which integrate vision and language, and it has a robotic arm enabling it to generate action plans, a feature that sets it apart from the others.

The company's developers claim that PaLM-E has advanced reasoning capabilities, giving it the ability to analyse the sequences of visual and language information that it receives. In this respect, it makes predictions and gradually learns more and more about the different tasks assigned to it. Google's goal is to incorporate this intelligent robot into everyday contexts, including the industrial sector and home automation.


Regulating Generative Artificial Intelligence: Balancing Innovation … – Lexology

Introduction

In a matter of months, generative artificial intelligence (AI) has been rapidly adopted by the public, thanks to programs like ChatGPT. The increasing use (or proposed use) of generative AI by organizations has presented a unique challenge for regulators and governments across the globe: fostering innovation while mitigating the risks associated with the technology. This article summarizes some of the key legislation and proposed legislation around the world that tries to strike that balance.

AI Regulation in Canada

1. Current Law

While Canada does not have an AI-specific law yet, Canadian lawmakers have taken steps to address the use of AI in the context of so-called automated decision-making. Québec's private sector law, as amended by Bill 64 (the "Québec Privacy Law"), is the first piece of legislation in Canada to explicitly regulate automated decision-making. The Québec Privacy Law imposes a duty on organizations to inform individuals when a decision is based exclusively on automated decision-making.

Interestingly, this duty to inform individuals about automated decision-making is also found in Bill C-27, the federal bill to overhaul Canada's federal private sector privacy legislation. Bill C-27 imposes obligations on organizations around automated decision systems. Organizations that use personal information in their automated decision systems to make predictions about individuals are required to:

In addition to the privacy reforms, the third and final part of Bill C-27 introduced Canada's first ever AI-specific legislation, which is discussed in the next section.

On June 16, 2022, Canada's Minister of Innovation, Science and Industry (the "Minister") tabled the Artificial Intelligence and Data Act ("AIDA"), Canada's first attempt to formally regulate certain artificial intelligence systems, as part of the sweeping privacy reforms introduced by Bill C-27.

Under AIDA, a person (which includes a trust, a joint venture, a partnership, an unincorporated association, and any other legal entity) who is responsible for an AI system must assess whether that system is a "high-impact system". Any person who is responsible for a high-impact system must then, in accordance with (future) regulations:

It should be noted that "harm" under AIDA means physical or psychological harm to an individual, damage to an individual's property, or economic loss to an individual.

If the Minister has reasonable grounds to believe that the use of a high-impact system by an organization or individual could result in harm or biased output, the Minister has a variety of remedies at their disposal.

You can read more about AIDA here.

Key AI Regulation, Frameworks, or Guidance Across the Globe

As of the writing of this article, AI-specific laws are few and far between on the international scale. AI regulation in most countries simply derives from existing privacy and technology laws that do not explicitly address AI or automated decision-making. Nevertheless, some countries have made notable progress in addressing the dawn of AI. For example, on June 14, 2023, the European Union (EU) passed the AI Act, becoming home to the world's first comprehensive AI law.

The EU's new AI Act establishes obligations for providers and users depending on the level of risk posed by the AI system. It will be interesting to see whether other countries adopt a similar risk-based approach as they develop their own AI laws.
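As a rough illustration of that risk-based logic, the Python sketch below maps the Act's four risk tiers to one-line obligation summaries. The tier names follow the draft Act, but the obligation summaries are simplified assumptions, far coarser than the legislative text.

```python
# Simplified tiers and obligations; the Act itself is far more detailed.
RISK_TIERS = {
    "unacceptable": "banned outright",
    "high": "risk management, data governance, technical documentation",
    "limited": "transparency duties, e.g. disclosing AI-generated content",
    "minimal": "no new obligations",
}

def obligations(tier: str) -> str:
    """Look up the (simplified) obligations attached to a risk tier."""
    return RISK_TIERS[tier]

print(obligations("high"))
```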

The following chart is a summary of the progress various countries have made in developing AI-specific legislation:

Evidently, the EU is leading the pack, while China and Brazil follow closely behind. The treatment of generative AI in so many of these documents shows the increasing alertness to AI-driven tools such as ChatGPT.

Interestingly, while potential legislation addressing AI is developing slowly in the United States, some states have already gone ahead and drafted their own state-specific regulations. In California, for example, Bill AB 331 would amend the state's Business and Professions Code to require impact assessments for automated decision tools and to impose certain obligations in accordance with the results of those assessments.[9] Individual state efforts such as California's show a growing recognition of just how dire the need to regulate this technology is.

Takeaways

On a global scale, awareness of the risks associated with AI and generative models such as ChatGPT is evidently increasing. The inherent complexity and unpredictability of AI and its corresponding tools and models make regulating its use an ongoing challenge. The right balance between allowing AI's benefits to thrive, such as the early detection and diagnosis of diseases in medicine, and combatting AI's risks, such as bias and discrimination, remains elusive.

While AIDA has yet to be made into official law in Canada, businesses that are using (or are planning to use) AI and its various tools and models should be prepared to comply with the upcoming AI laws. Here are some recommendations that organizations should adopt to get ahead of upcoming AI laws such as AIDA:

The core of any AI compliance framework should be the incorporation of privacy-by-design and ethics-by-design concepts. This means that data protection and ethical features will be integrated into the organization's systems of engineering, practices and procedures. These features will likely allow an organization to adapt to changing technology and regulations.


Artificial intelligence set to personalise treatment of heart failure … – European Society of Cardiology

Notes to editor

ESC Press Office
Tel: +33 (0)489 872 075
Email: press@escardio.org

Follow us on Twitter @ESCardioNews

Funding: AI4HF has received €5,910,451.25 from the European Health and Digital Executive Agency (HaDEA).

Disclosures: None.

References and notes

1. Savarese G, Becher PM, Lund LH, et al. Global burden of heart failure: a comprehensive and updated review of epidemiology. Cardiovasc Res. 2022;118:3272–3287.

2. Azad N, Lemay G. Management of chronic heart failure in the older population. J Geriatr Cardiol. 2014;11:329–337.

3. Koudstaal S, Pujades-Rodriguez M, Denaxas S, et al. Prognostic burden of heart failure recorded in primary care, acute hospital admissions, or both: a population-based linked electronic health record cohort study in 2.1 million people. Eur J Heart Fail. 2017;19:1119–1127.

4. Benjamin EJ, Blaha MJ, Chiuve SE, et al. Heart disease and stroke statistics-2017 update: A report from the American Heart Association. Circulation. 2017;135:e146–e603.

5. Partners

1. Netherlands Heart Institute, the Netherlands

2. University of Barcelona, Spain

3. Centre for Research & Technology, Greece

4. Barcelona Supercomputing Centre, Spain

5. Software Research & Development Corporation, Turkey

6. Vall d'Hebron Hospital Research Institute, Spain

7. International Clinical Research Centre, Czechia

8. Muhimbili University of Health and Allied Sciences, Tanzania

9. Centre for Biomedical and Environmental Technology Research, Peru

10. SHINE 2Europe, Portugal

11. Regenold GmbH, Germany

12. European Society of Cardiology, France

13. European Heart Network, Belgium

14. University Medical Centre Utrecht, the Netherlands

15. Amsterdam University Medical Centre, the Netherlands

Associated partner

16. University of Oxford, UK

About the European Society of Cardiology

The ESC brings together health care professionals from more than 150 countries, working to advance cardiovascular medicine and help people to live longer, healthier lives.


Red Cat and Athena AI announce breakthrough artificial intelligence and computer-vision capabilities for Teal 2 military-grade drone – Yahoo Finance

Red Cat Holdings, Inc.

Athena's AI technology has successfully performed target recognition and battle tracking for a nighttime flight of Red Cat's Teal 2


Athena AI's computer-vision technology successfully identified human and vehicle targets using drone video recorded during a nighttime test flight of Red Cat's Teal 2 military-grade sUAS.

SAN JUAN, Puerto Rico, June 20, 2023 (GLOBE NEWSWIRE) -- Red Cat Holdings, Inc. (Nasdaq: RCAT) ("Red Cat" or the "Company"), a military technology company integrating robotic hardware and software to protect and support the warfighter, today announces it has completed the second phase of its artificial intelligence and computer-vision partnership with Athena AI.

Athena was first announced as a partner for Red Cat's Teal 2 military-grade drone in March. Now, by processing video that the Teal 2's thermal-imaging sensor recorded during a nighttime test flight, Athena's technology has successfully performed target recognition and battle tracking. This capability enables commanders to make fast, AI-assisted decisions on the battlefield.

"Nighttime computer-vision capability is a Teal 2 add-on we support for users who need high-value data at night," said George Matus, founder and CEO of Red Cat subsidiary Teal Drones. "The images and insights that Athena's technology delivers are outstanding. Athena's battle-tracking capabilities and artificial intelligence, combined with Teal's best-in-class drone, give warfighters the unfair advantage."

Australia-based Athena, an AI-enabled military decision-support company, has licensed to Red Cat its proprietary computer-vision architecture, which allows high-speed tracking of objects and, at slower speeds, in-depth data exploitation. Athena's solution can identify weapons, humans and other targets at night, as well as Identification Friend or Foe (IFF) markers, such as Cyalume HALOs and IR beacons.


"Unlike a lot of other drones in the sUAS quad space that aren't MISB-compliant, the Teal 2's KLV metadata unlocks the full decision-suite support of Athena AI," said Athena CEO Stephen Bornstein. "This combination of a nighttime sUAS with live-vehicle metadata allows for real-time situational awareness to support battle tracking, a common operational picture (COP) at higher echelons of command, and accurate targeting."
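For context, KLV refers to key-length-value encoding, the binary metadata format on which MISB standards are built: each field is a tag, a BER-encoded length, and a value. The Python sketch below shows the general parsing pattern with made-up tags and field meanings; it is not the actual MISB ST 0601 register or Athena's implementation.

```python
import struct

def read_ber_length(buf: bytes, i: int):
    """Decode a BER length field starting at offset i; return (length, next offset)."""
    first = buf[i]
    if first < 0x80:                        # short form: one byte holds the length
        return first, i + 1
    n = first & 0x7F                        # long form: next n bytes hold the length
    return int.from_bytes(buf[i + 1:i + 1 + n], "big"), i + 1 + n

def parse_local_set(value: bytes):
    """Split a local-set payload into (tag, raw value) pairs."""
    i, items = 0, []
    while i < len(value):
        tag = value[i]
        length, i = read_ber_length(value, i + 1)
        items.append((tag, value[i:i + length]))
        i += length
    return items

# Hypothetical packet: tag 2 = a 4-byte latitude, tag 3 = a 4-byte longitude
# (scaled integers; the tag numbers and scaling are invented for this example).
payload = (bytes([2, 4]) + struct.pack(">i", 51_477_500) +
           bytes([3, 4]) + struct.pack(">i", -557_300))
for tag, raw in parse_local_set(payload):
    print(tag, struct.unpack(">i", raw)[0])
```

Carrying such per-frame metadata alongside the video is what lets downstream software place detected objects on a map in real time.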

Officially launched in April, the Teal 2 is designed to "Dominate the Night" and arrives as the world's leading sUAS for night operations. The Teal 2 is the first sUAS to be equipped with Teledyne FLIR's new Hadron 640R sensor, providing end users with the highest-resolution thermal imaging in a small form factor. The Teal 2 also offers the latest intelligence, surveillance and reconnaissance (ISR) technology, delivering time-critical information and enabling operators to make faster, smarter decisions. The Teal 2 airframe has been designed as an open platform that can add software features such as Athena AI, and those combined products improve Red Cat's gross margins.

Red Cat will exhibit the Teal 2 at the Modern Day Marine expo in Washington, D.C., from June 27-29.

To view a spec sheet for the Teal 2, click here. To watch a short video about the Teal 2, click here.

About Red Cat Holdings, Inc.
Red Cat (Nasdaq: RCAT) is a military technology company that integrates robotic hardware and software to provide critical situational awareness and actionable intelligence to on-the-ground warfighters and battlefield commanders. Its mission is to enhance the effectiveness and safety of military operations domestically and globally and to Dominate the Night. Red Cat's suite of solutions includes Teal Drones, developer of the Teal 2, a small unmanned system with the highest resolution imaging for nighttime operations, and Skypersonic, a leading provider of unmanned aircraft for interior spaces and other dangerous environments. Learn more at https://www.redcatholdings.com.

Forward-Looking Statements
This press release contains "forward-looking statements" that are subject to substantial risks and uncertainties. All statements, other than statements of historical fact, contained in this press release are forward-looking statements. Forward-looking statements contained in this press release may be identified by the use of words such as "anticipate," "believe," "contemplate," "could," "estimate," "expect," "intend," "seek," "may," "might," "plan," "potential," "predict," "project," "target," "aim," "should," "will," "would," or the negative of these words or other similar expressions, although not all forward-looking statements contain these words. Forward-looking statements are based on Red Cat Holdings, Inc.'s current expectations and are subject to inherent uncertainties, risks and assumptions that are difficult to predict. Further, certain forward-looking statements are based on assumptions as to future events that may not prove to be accurate. These and other risks and uncertainties are described more fully in the section titled "Risk Factors" in the final prospectus related to the public offering filed with the Securities and Exchange Commission. Forward-looking statements contained in this announcement are made as of this date, and Red Cat Holdings, Inc. undertakes no duty to update such information except as required under applicable law.

Contacts
NEWS MEDIA: Anthony Priwer, Dalton Agency
Phone: (615) 515-4891
Email: apriwer@daltonagency.com

INVESTORS: CORE IR
Phone: (516) 222-2560
Email: investors@redcat.red
Website: https://www.redcatholdings.com

A photo accompanying this announcement is available at https://www.globenewswire.com/NewsRoom/AttachmentNg/ef35bdb2-26ed-4e38-9fb3-51c6cace7411


The impact of artificial intelligence on growth and employment – CEPR

At the 2023 World Economic Forum, tech entrepreneur Mihir Shukla noted: "People keep saying AI is coming, but it is already here." The use of artificial intelligence (AI) for day-to-day tasks has increased rapidly over the last decade, and ChatGPT (developed by OpenAI) is a prime example of this, with the popular generative AI used by more than a billion users for everyday tasks like coding and writing. The speed and scale of AI uptake can be captured by a simple fact: it took ChatGPT just 60 days to reach its 100 millionth user; in contrast, Instagram took two years to reach the same milestone. A recent Stanford University report found that the number of AI patents increased 30-fold between 2015 and 2021 (HAI 2023), highlighting the rapid rate of progress in AI development. AI-powered technologies can now perform a range of tasks, including retrieving information, coordinating logistics, providing financial services, translating complex documents, writing business reports, preparing legal briefs, and even diagnosing diseases. Moreover, they are likely to improve the efficiency and accuracy of these tasks thanks to their ability to learn and improve via machine learning (ML).

AI is generally acknowledged to be an engine of productivity and growth. With its ability to process and analyse enormous volumes of data, it has the potential to boost the efficiency of business operations. The McKinsey Global Institute predicts that around 70% of companies will adopt at least one type of AI technology by 2030, though less than half of large companies may use the full range of AI technologies. PricewaterhouseCoopers predicts that AI could increase global GDP by 14% by 2030 (PwC 2017).

Research into the impact of AI on the labour market has expanded recently. Acemoglu and Restrepo (2018) provide a theoretical framework to understand the impact of new technologies on the labour market. They decompose the effect of new technologies on labour into three broad effects: a displacement effect, a productivity effect and a reinstatement effect (new technologies can serve as a platform to create new tasks in many service industries, where labour has a comparative advantage relative to machines, boosting labour demand).
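As a stylised numeric reading of that decomposition (with entirely made-up magnitudes), the net change in labour demand can be written as the sum of a negative displacement effect and positive productivity and reinstatement effects:

```python
# Illustrative, assumed magnitudes only; the decomposition, not the numbers,
# is the point (Acemoglu and Restrepo 2018).
displacement = -0.04   # tasks taken over by machines
productivity = 0.03    # cheaper output raises demand economy-wide
reinstatement = 0.02   # brand-new tasks in which labour has an advantage

net_labour_demand = displacement + productivity + reinstatement
print(f"Net change in labour demand: {net_labour_demand:+.2%}")  # +1.00%
```

Whether the sum is positive or negative in practice is precisely what the studies below disagree about.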

Frank et al. (2019) classify the current literature on the labour market implications of AI into two broad categories: a doomsayer's perspective and an optimist's perspective. Doomsayers believe that labour substitution by AI will harm employment. Frey and Osborne (2013) estimate that 47% of total US employment is at risk of automation over the next decade. Their research reveals that a substantial share of employment in service occupations (where most US job growth has occurred over the past decades) is highly susceptible to computerisation. Bowles (2014) uses Frey and Osborne's (2013) framework to estimate that 54% of EU jobs are at risk of computerisation. Acemoglu and Restrepo (2017) provide a historical example of excessive automation negatively affecting the labour market due to weak productivity and reinstatement effects, finding that the areas of the US most exposed to industrial automation in the 1990s and 2000s experienced large and robust negative effects on employment and wages.

AI is also expected to have a disruptive effect on the composition of the labour market. Autor (2015) presented evidence that, due to the advent of computers, the labour market has become polarised over the last few decades towards low-skilled and high-skilled jobs and away from medium-skilled jobs. However, he stated that this polarisation is likely to be reversed, as some low- and medium-skilled jobs are likely to be relatively resistant to automation, while some highly skilled but relatively routine jobs may be automatable (potentially with technologies like AI). In contrast, Petropoulos and Brekelmans (2020) concluded that, unlike the computer and robotics revolution, the AI revolution is unlikely to cause job polarisation, as it will alter low-skilled, middle-skilled and high-skilled jobs alike.

Optimists believe that AI's productivity and reinstatement effects will more than compensate for the substitution effect. Some opinion pieces project that AI and robotics will have created up to 90 million jobs by 2025, indicating a strong positive labour market impact. The World Economic Forum concluded in October 2020 that while AI would likely take away 85 million jobs globally by 2025, it would also generate 97 million new jobs in fields ranging from big data and machine learning to information security and digital marketing.

Lawrence et al. (2017) argue that AI automation is unlikely to negatively impact the employment market due to its large positive spillover effects (the reinstatement effect), which would counteract the negative direct effects of substitution in the labour market and can be seen as a form of Schumpeterian creative destruction. They believe that automation is likely to transform, rather than eliminate, work. In contrast to other studies finding larger negative effects, Arntz et al. (2016) estimate that only 9% of jobs in the UK are susceptible to automation in the next decade. They argue that instead of substitution, transformation is more likely to occur, with 35% of jobs changing radically in the next two decades.

Nakamura and Zeira (2018) build a task-based theoretical model showing that automation need not lead to unemployment in the long run. Somers et al. (2022) conduct a systematic review of the empirical literature on technological change and its impact on employment and find that the number of studies supporting the labour substitution effect is more than offset by the number supporting the labour-creating/reinstating and real income effects of new technologies. Moreover, they find that studies analysing the net employment effect of technological change suggest the net impact of technology on labour to be positive rather than negative, reaffirming this narrative. Bholat (2020) further notes that job losses in specific sectors due to new technologies have historically been counterbalanced by broad-based gains in aggregate real income, as these technologies create higher-quality and lower-priced goods and services. This leads to higher disposable income, which boosts demand for new products, which in turn boosts labour demand in those sectors. Alan Manning notes that some of the direst predictions about the impact of automation on employment during the past decade have not come to pass (Bholat 2020). This may indicate that concerns about the impact of AI on employment are slightly exaggerated.

The May 2023 CfM-CEPR survey asked the members of its panel to forecast the impact of AI on global economic growth and unemployment rates in high-income countries over the upcoming decade. The survey contained two questions. The first asked the panellists to forecast the impact of AI on global economic growth over the upcoming decade. The second asked them to predict the impact of AI on unemployment in high-income countries in the upcoming decade.

Twenty-seven panel members responded to this question. The majority of the panel (64%) believes that AI will increase global economic growth to 4–6% per annum over the upcoming decade. The remainder of the panel (36%) thinks that AI will have no significant effect on global growth. Notably, most panellists express a great degree of uncertainty about their predictions.

Almost two-thirds of the panel believes that the development of AI over the upcoming decade will positively impact economic growth. Jorge Miguel Bravo (Nova School of Business and Economics, Lisbon) cites the widespread uptake of machine learning as a general-purpose technology across the developed and developing world as the main reason why he would "bet on the upside rather than no change or fall". Ugo Panizza (The Graduate Institute, Geneva (HEID)) echoes these thoughts, claiming that AI will lead to an increase in productivity and thus higher economic growth. However, he notes some caveats to this prediction, stating that "AI could increase unemployment and inequality and this may have backlashes on productivity and growth". Robert Kollmann (Université Libre de Bruxelles) similarly expresses subdued optimism regarding the advent of AI, arguing that it is unlikely to affect the long-term trend growth rate of world GDP, but could boost global growth slightly, by around 0.5 percentage points.

Most panellists believe that the implications of AI for global economic growth are extremely uncertain, rendering forecasting impossible. Andrea Ferrero (University of Oxford) summarises this view: "I expect AI to have a significant impact on the economy, but I'm not really sure how. I can imagine that some sectors will benefit more than others, and some may even suffer. The implications for economic growth overall are highly uncertain in my view." Jagjit Chadha (National Institute of Economic and Social Research) states that the overall impact of AI will depend on several factors (which policies are adopted, how monopoly power is challenged and what new ideas are ultimately released) and hence he cannot say what the impact will be with any degree of certainty.

Ricardo Reis (London School of Economics) succinctly summarises the great degree of uncertainty shared by the panel: "Forecasting future growth over a decade is very hard, so 'not confident' is the most relevant part of this answer."

Twenty-nine panel members responded to this question. Most of the panel (63%) believes that AI will not affect employment rates in high-income countries across the next decade. The majority of the remainder (27%) think that developments in AI could increase unemployment in high-income countries. Only two panellists believe that AI could decrease unemployment in high-income countries over the upcoming decade. Notably, more than half of the panel express a lack of confidence in their responses, indicating the high degree of uncertainty surrounding this question.

Most panellists believe that AI developments are unlikely to impact unemployment in high-income countries over the long run. Michael Wickens (Cardiff Business School and University of York) cites previous technological changes to support this stance: "The number of jobs and the level of unemployment will not change but the number of hours of work will fall and leisure increase. This is what has happened following past tech[nological] improvements. The same thing will happen with AI." Echoing this view, Cédric Tille (The Graduate Institute, Geneva) states that unemployment effects may be limited but notes that "the impact on income inequality and need for redistribution policy may be large". Maria Demertzis (Bruegel) argues that the impact on unemployment could depend on reskilling, stating "the quicker this [reskilling] happens, the less the impact on unemployment".

Several panellists indicate uncertainty regarding long-term predictions of the impact of AI on employment. Andrea Ferrero (University of Oxford) highlights this uncertainty: "I expect unemployment to increase in the short run as human employment in some tasks and jobs becomes obsolete. In the medium run, the supply will adjust and it's possible that AI may even reduce the natural rate of unemployment in the long run. I'm just not sure."

A fraction of the panel expresses pessimism regarding the impact of AI on the labour market due to its impact on vulnerable groups. Wouter den Haan (London School of Economics) sums up this viewpoint: "My concern is that AI may be bad for the more vulnerable in the labour markets like the ones who will not be that easy to adapt to a new environment."

However, a few panel members express an opposing view, claiming that AI may actually decrease unemployment in high-income countries. Volker Wieland (Goethe University Frankfurt and IMFS) suggests that AI could potentially decrease both unemployment rates and the number of hours worked.

Acemoglu, D and P Restrepo (2017), "Robots and Jobs: Evidence from US Labor Markets", SSRN Electronic Journal 128(6).

Acemoglu, D and P Restrepo (2018), "Modeling Automation", SSRN Electronic Journal.

Agrawal, A, J Gans and A Goldfarb (eds) (2019), The Economics of Artificial Intelligence: An Agenda, University of Chicago Press.

Arntz, M, T Gregory and U Zierahn (2016), "The Risk of Automation for Jobs in OECD Countries", OECD Social, Employment and Migration Working Paper No. 189.

Autor, D H (2015), "Why Are There Still So Many Jobs? The History and Future of Workplace Automation", Journal of Economic Perspectives 29(3): 3–30.

Bholat, D (2020), "The impact of machine learning and AI on the UK economy", VoxEU.org, 2 July.

Bowles, J (2014), "Chart of the Week: 54% of EU jobs at risk of computerisation", Bruegel blog.

Frank, M R, D Autor, J E Bessen et al. (2019), "Toward understanding the impact of artificial intelligence on labor", Proceedings of the National Academy of Sciences 116(14): 6531–6539.

Frey, C B and M A Osborne (2013), "The Future of Employment: How Susceptible Are Jobs to Computerisation?", Technological Forecasting and Social Change 114(1): 254–280.

HAI Institute for Human-Centered AI (2023), "Artificial Intelligence Index Report 2023", Stanford University.

Lawrence, M, C Roberts and L King (2017), "Managing Automation: Employment, inequality and ethics in the digital age", IPPR Commission on Economic Justice Discussion Paper.

Nakamura, H and J Zeira (2018), "Automation and unemployment: Help is on the way", VoxEU.org, 11 December.

Petropoulos, G and S Brekelmans (2020), "Artificial intelligence's great impact on low and middle-skilled jobs", Bruegel blog.

PwC (2017), "The macroeconomic impact of artificial intelligence".

Somers, M, A Theodorakopoulos and K Hütte (2022), "The fear of technology-driven unemployment and its empirical base", VoxEU.org, 10 June.


Generative Artificial Intelligence Emerges as a Globally Disruptive … – PR Newswire

DUBLIN, June 21, 2023 /PRNewswire/ -- The "Generative Artificial Intelligence Emerges as a Globally Disruptive Technology" report has been added to ResearchAndMarkets.com's offering.

It will be critical for vendors and service providers to assess the potential application opportunities and adoption use cases for enterprise generative AI, and to establish strategic partnerships to develop strong value propositions and accelerate go-to-market strategies.

Generative artificial intelligence (AI) applications have captured widespread attention from enterprise and consumer segments. This technology marks an inflection point in AI adoption, with the potential to transform business models, company functions, and job roles.

While enterprise adoption is still nascent, organizations are exploring the potential impact and seeking to understand the steps to build adoption readiness. In addition, Generative AI comes with legal and ethical concerns that must be considered and planned for.

Enterprises must encourage a culture of experimentation and simultaneously follow a structured approach to prioritize use cases and accelerate implementation.

The technology's significance and disruptive potential create new growth opportunities across the information and communication technologies (ICT) ecosystem for digital infrastructure applications, boosting algorithm training and application deployments.

Key Topics Covered:

1. Strategic Imperatives

2. Technology Overview

3. Enterprise Generative AI

4. Generative AI Ecosystem

5. Growth Opportunity Analysis

6. Growth Opportunity Universe

For more information about this report visit https://www.researchandmarkets.com/r/eu6x0c

About ResearchAndMarkets.com
ResearchAndMarkets.com is the world's leading source for international market research reports and market data. We provide you with the latest data on international and regional markets, key industries, the top companies, new products and the latest trends.

Media Contact:

Research and Markets
Laura Wood, Senior Manager
[emailprotected]
For E.S.T Office Hours Call +1-917-300-0470
For U.S./CAN Toll Free Call +1-800-526-8630
For GMT Office Hours Call +353-1-416-8900
U.S. Fax: 646-607-1907
Fax (outside U.S.): +353-1-481-1716

Logo: https://mma.prnewswire.com/media/539438/Research_and_Markets_Logo.jpg

SOURCE Research and Markets


The workers already replaced by artificial intelligence – BBC

16 June 2023


Dean Meadowcroft never thought that AI would replace him

Until recently Dean Meadowcroft was a copywriter in a small marketing department.

His duties included writing press releases, social media posts and other content for his company.

But then, late last year, his firm introduced an Artificial Intelligence (AI) system.

"At the time the idea was that it would be working alongside human lead copywriters to help speed up the process, essentially streamline things a little bit more," he says.

Mr Meadowcroft was not particularly impressed with the AI's work.

"It just kind of made everybody sound middle of the road, on the fence, and exactly the same, and therefore nobody really stands out."

The content also had to be checked by human staff to make sure it had not been lifted from anywhere else.

But the AI was fast. What might take a human copywriter between 60 and 90 minutes to write, the AI could do in 10 minutes or less.

Around four months after the AI was introduced, Mr Meadowcroft's four-strong team was laid off.

Mr Meadowcroft can't be certain, but he's pretty sure the AI replaced them.

"I did laugh-off the idea of AI replacing writers, or affecting my job, until it did," he said.

The latest wave of AI hit late last year when OpenAI launched ChatGPT.

Backed by Microsoft, ChatGPT can give human-like responses to questions and can, in minutes, generate essays, speeches, even recipes.

While not perfect, such systems are trained on the ocean of data available on the internet - an amount of information impossible for even a team of humans to digest.

So that's left many wondering which jobs might be at risk.

Any job losses would not fall equally across the economy. According to one recent report, 46% of tasks in administrative professions and 44% in legal professions could be automated, but only 6% in construction and 4% in maintenance.

The report also points out that the introduction of AI could boost productivity and growth and might create new jobs.

There is some evidence of that already.


IKEA has retrained thousands of call centre workers as design advisers

The furniture giant says that 47% of customer calls are now handled by an AI called Billie.

While IKEA does not see any job losses resulting from its use of AI, such developments are making many people worried.

A recent survey by Boston Consulting Group (BCG), which polled 12,000 workers from around the world, found that a third were worried that they would be replaced at work by AI, with frontline staff more concerned than managers.

Jessica Apotheker from BCG says that's partly due to fear of the unknown.

"When you look at leaders and managers, we have more than 80% of them that use AI at least on a weekly basis. When you look at frontline staff, that number drops to 20% so with the lack of familiarity with the tech comes much more anxiety and concern on the outcomes for them."

But perhaps there is good reason to be anxious.


Alejandro Graue lost voiceover work to an AI system

For three months last year, Alejandro Graue had been doing voiceover work for a popular YouTube channel.

It seemed to be a promising line of work: a whole YouTube channel in English had to be re-voiced in Spanish.

Mr Graue went on holiday late last year confident that there would be work when he returned.

"I was expecting to have that money to live with - I have two daughters, so I need the money," he says.

But to his surprise, before he returned to work, the YouTube channel uploaded a new video in Spanish - one he had not worked on.

"When I clicked on it, what I heard was not my voice, but an AI generated voice - a very badly synced voiceover. It was terrible. And I was like, What is this? Is this like going to be my new partner in crime like the channel? Or is this going to replace me?" he says.

A phone call to the studio he worked for confirmed the worst. The client wanted to experiment with AI because it was cheaper and faster.

That experiment turned out to be a failure. Viewers complained about the quality of the voiceover and eventually the channel took down the videos that featured the AI-generated voice.

But Mr Graue did not find that very comforting. He thinks the technology will only improve and wonders where that will leave voiceover artists like him.

"If this starts to happen in every job that I have, what should I do? Should I buy a farm? I don't know. What other job could I look for that is not going to be replaced as well in the future? It's very complicated," he says.

If AI is not coming for your job, then it is likely you will have to start working with it in some way.

After a few months of freelance work, former copywriter Dean Meadowcroft took a new direction.

He now works for an employee assistance provider, which gives wellbeing and mental health advice to staff. Working with AI is now part of his job.

"I think that is where the future is for AI, giving quick access to human-led content, as opposed to completely eliminating that human aspect," he says.

You can see the full interviews with Dean Meadowcroft and Alejandro Graue on Talking Business with Aaron Heslehurst on BBC News.



Opinion | Artificial Intelligence: Whither India? – News18

Once the preserve of lawyers, computer programmers and specialists in digitalisation, Artificial Intelligence (AI) and the questions surrounding its future governance have in recent weeks risen to the forefront of the international political agenda. AI is, evidently, a transformative technology with great promise for our societies, offering unparalleled opportunities to increase prosperity and equity. But along with its growing pervasiveness, concerns are growing about its potential risks and threats, and this dawning realisation brings in its train reflections on whether, and if so how, to regulate it in the public interest.

Any meaningful discussion of AI and the scope for its regulation has to begin, of course, with finding a common understanding of the term. The Organisation for Economic Co-operation and Development (OECD), an intergovernmental organisation, has established the first principles on AI, which reflect its own founding principles based on trust, equality, democratic values and human rights. It defines AI as "a machine-based system capable of influencing the environment by producing an output (predictions, recommendations or decisions) for a given set of objectives."

But the OECD is not alone in seeking to define AI. We are currently witnessing the emergence of numerous and at least potentially competing definitions of AI, a technology that is developing and morphing so fast that it may defy a static definition, which of course poses problems for regulators but also for businesses. Some of the definitions are technologically neutral, while others are what one might describe as more value-based.

In all likelihood, value-based definitions are going to remain more fit for purpose, since they rely less on the current state of AI's technological development and instead introduce the political and ethical framework that is now driving the concerns of governments and regulators. The international community is increasingly questioning the role of AI in our society not only from a technological and economic perspective but also from a moral one. Is AI the Good Angel of fairy tales? Or is it, as some would have it, Frankenstein's Monster? In short, the global community is becoming more aware of the risks attached to AI, and not just the opportunities it presents. Some salutary illustrations of this:

First, an open letter from a large group of eminent scientists this March calling for all AI labs "to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4."

"AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs. Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one - not even their creators - can understand, predict, or reliably control," the letter read.

This quite drastic warning was more recently echoed by Professor Geoffrey Hinton, the man widely credited as the "Godfather of AI". Hinton, in an interview in the New York Times in early June, referred to the risk that the race between Microsoft and Google would push forward the development of AI without appropriate guardrails in place. He said: "We need to find a way to control artificial intelligence before it's too late. The world shouldn't wait for AI to be smarter than humans before taking control of it as it grows." Even the rap artist Snoop Dogg, confronted with an AI version of himself, is wondering: "Are we in a movie? I got AI right now that they made for me. The AI could talk to me! I'm like, man, this thing can hold a real conversation? Like for real for real?"

So, considering the current state of AI today, we need to assess to what extent we are protected against AI's negative potential. And if we judge that the current levels of protection are insufficient, we need to identify solutions that will maximise AI's positive and innovatory contribution to social progress while safeguarding against its abuse or nefarious impacts.

AI is not completely new. It has been around for 50 years and is integrated into more applications and systems than one might think, but we are now witnessing a proliferation of AI technology at a speed which is overtaking the capacity of regulators to regulate.

To regulate or not to regulate? That is the question.

If Hamlet were with us today, this would surely have been his existential question. Some governments, companies and commentators argue that there is no need to regulate AI through hard law with hard sanctions, and that it is better to take a more flexible approach of encouraging guidance and compliance. Building on this laissez-faire approach, others argue that we can rely on AI's ability to regulate itself, effectively providing the technical solutions to a whole range of challenges and risks, including those surrounding security, privacy and human oversight. And at the other end of the regulatory spectrum are those who call for the application of the precautionary principle, which I supported in my own doctoral thesis some years ago on the protection of consumers online: namely, to regulate even before the risks are identified or clearly emerge. To close the stable door, as it were, before the horse even enters it!

The positive news is that the IT and legal communities and governments are engaging in good faith with one another in an effort to establish common ground and solidarity regarding the potential future governance of AI, starting with this basic question: "Do we need to regulate AI by hard regulation, or by voluntary regulation, at the national or international level?"

In my view, it would be dangerous to simply reject the notion of regulation. Governments have a responsibility to at the very least consider the dangers the technology poses at its current stage as well as in the future, and arguably to take a proactive approach rather than risk waiting for AI to cause harm before regulating it. We should learn from the experience of the turn of the 20th century, when governments only started regulating motor cars after they had caused accidents and killed people.

The borderless nature of AI adds a layer of complexity to attempts to determine the need to regulate and the scope for doing so. In the past, national governments would generally be able to counteract national market failures by giving consumers protective legislation at the national level. Today, this situation has changed. The fundamental legitimacy of authority has always been based on physical presence and territorially empowered courts, but AI technology reaches every people and every jurisdiction on the planet, and cyberspace itself exists in no physical place. Since every substantive area of human interaction is potentially impacted by AI, potentially every source of law and regulatory interest in every jurisdiction is engaged.

This situation calls for a new and more interconnected assessment of different national legal systems, one which recognises that AI functions as a single global system almost without boundaries and thus, if it is to be regulated at all, needs to be regulated globally.

If one accepts the need to regulate AI, one needs at the same time to avoid being overzealous, since regulation is a blunt instrument, subject to legal interpretation and sometimes to political interference. Regulation also has the potential to stifle innovation and derail the benefits that AI can bring in areas as diverse as vehicle safety, improved productivity, health and computer programming. And since AI moves so quickly, regulation needs to be responsive to that pace.

When we think specifically about how we should regulate AI, we need to think each time about the context. It is difficult to regulate AI at a general-purpose level. We do not regulate electronics or computers in general. Instead, we have narrower regulatory regimes.

Earlier, we observed that governments face a choice of regulatory approaches: hard regulation, voluntary regulation, and national versus international level.

In determining whether, and if so how, to regulate, governments also need to take into account and base their decisions on factors such as risk governance (risk assessment, management and communication), the science-policy interface, and the link between precaution and innovation.

Interestingly, we are seeing that an increasing number of jurisdictions are beginning to develop regulatory frameworks for AI focused on the value-based and human-centric approach that we touched upon earlier. It is constructive at this juncture to survey some of the principal regulatory approaches to AI, starting with the European Union which is arguably the most advanced jurisdiction as regards AI regulation.

The EU is a group of 27 countries that operate as a cohesive economic and political bloc. It has developed an internal single market through a standardised system of laws that apply in all member states in matters where members have agreed to act as one. At the time of writing, Europe is finalising draft legislation on AI, namely the EU's AI Act (AIA), which could go into effect by 2024 and which takes a risk-based approach to the regulation of artificial intelligence.

Together with the Digital Markets Act (DMA) and Digital Services Act (DSA), the EU has sought to develop a holistic approach to how authorities seek to govern the use of AI and information technology in society. The EU law takes a risk-based approach to regulating AI, where the obligations for a system are proportionate to the level of risk that it poses.

The AI Act categorises applications of AI into four levels of risk: unacceptable risk, high risk, limited risk and minimal or no risk, as follows:

Unacceptable risk: applications falling under this definition are banned. They include AI systems using subliminal, manipulative or deceptive techniques to distort behaviour, including, for example:

High risk: the high-risk areas stipulated in the EU AI Act include AI systems that may harm people's health or human rights, as well as the environment. The definition also places AI systems that influence voters in political campaigns, and recommender systems used by social media platforms with more than 45 million users (under the Digital Services Act), on its high-risk list. When confronted with high-risk AI systems, developers will have to comply with stricter obligations in terms of risk management, data governance and technical documentation. The legislation goes on to cover the following parameters:

General-purpose AI - transparency measures

Supporting innovation and protecting citizens' rights

The draft EU AI legislation takes pains not to stifle innovation, allowing exemptions for research purposes and promoting regulatory sandboxes, established by public authorities to test AI before its deployment. The draft EU AI Act is now being negotiated between the so-called co-legislators: the Council (the 27 EU Member States acting collectively) and the elected European Parliament.

India has made efforts to standardise responsible AI development and is now about to consider regulating AI per se. According to the Indian government, AI has proven to be an enabler of the digital and innovation ecosystem, but its application has given rise to ethical concerns and associated risks around issues such as bias and discrimination in decision-making, privacy violations, lack of transparency in AI systems, and questions about responsibility for harm caused by it.

There is also a more prosaic but very real concern in India that AI and automation could put people out of work in many different fields, such as manufacturing, transportation and customer service, with a corresponding impact on the economy.

It is interesting to observe that India also has one of the highest numbers of ChatGPT users in the world. OpenAI's chatbot has raised concerns about misinformation and about privacy violations arising from the collection of personal data without consent.

As India uses the technology extensively for governance and for providing services to citizens, one can confidently predict that it will also prepare its own blueprint for artificial intelligence (AI) development. As noted earlier, in doing so it will be essential to ensure that regulation is not so stringent that it hampers innovation or slows down the implementation of the technology.

As far as governance is concerned, there are already a number of extant AI strategy documents for India: "Responsible AI" of February 2020 and "Operationalizing Principles for Responsible AI" of August 2021. There is also a National Strategy for Artificial Intelligence covering several sectors.

There is a strong likelihood that the AI framework in Europe will inspire India. It is notable that in the recent Digital India Act (DIA) consultation process, India signalled its intention to regulate high-risk AI systems. The EU's AI Act emphasises the need for openness and accountability in the development and deployment of AI systems, based on a human rights approach, and there is every sign that India will follow this ethical model.

As for the Indian Digital Personal Data Protection Bill 2022 (DPDPB 2022), it applies to AI developers who collect and use massive amounts of data to train their algorithms. This implies that AI developers must comply with the key principles of privacy and data protection enshrined in the Bill, such as purpose limitation, data minimisation, consensual processing and contextual integrity.
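
In practice, those principles translate into gatekeeping before personal data ever reaches a training pipeline. The Python sketch below is a hypothetical illustration rather than anything drawn from the Bill's text: the record fields (consent_given, declared_purpose) and the single allowed purpose are my own assumptions, but the three checks mirror consensual processing, purpose limitation and data minimisation.

# Hypothetical pre-ingestion filter reflecting DPDPB-style principles.
# All field names here are illustrative assumptions, not terms from the Bill.

ALLOWED_PURPOSE = "model_training"   # purpose limitation: one declared use only
REQUIRED_FIELDS = {"text"}           # data minimisation: keep only what is needed

def admit_record(record: dict) -> dict | None:
    # Admit a record into the training set only if it passes all three checks.
    if not record.get("consent_given"):                      # consensual processing
        return None
    if record.get("declared_purpose") != ALLOWED_PURPOSE:    # purpose limitation
        return None
    # Data minimisation: strip everything except the fields actually required.
    return {k: v for k, v in record.items() if k in REQUIRED_FIELDS}

raw = {"text": "example utterance", "email": "user@example.com",
       "consent_given": True, "declared_purpose": "model_training"}
print(admit_record(raw))  # {'text': 'example utterance'}, the email is dropped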

Having surveyed the regulatory climate at the national level in key jurisdictions, it is instructive to study the emergence of the concept of AI Digital Partnerships, and in particular, two of the growing number of international partnerships that have recently been established between the EU and, respectively, the US and India.

Through its recently formed digital partnerships, the EU is seeking to strengthen connectivity across the world, collaborating with like-minded countries to tackle the digital divide at the international level. The partnerships rest on the four pillars of the EU's so-called Digital Compass strategy: skills, infrastructure, and the transformation of business and of public services. The underlying objective is to foster a fair, inclusive and equal digital environment for all. The EU currently has partnerships with India, the US, Japan, South Korea and Singapore, focused on safety and security in the following areas: secure 5G/6G; safe and ethical applications of artificial intelligence; and the resilience of global supply chains in the semiconductor industry.

Taking first the EU-US partnership, both parties reaffirm their commitment to a risk-based approach to AI to advance trustworthy and responsible AI technologies. The partnership places particular emphasis on the risks and opportunities of generative AI, including the preparation of a "Joint Roadmap on Evaluation and Measurement Tools for Trustworthy AI and Risk Management". Three dedicated expert groups now focus on AI terminology and taxonomy; cooperation on AI standards and tools for trustworthy AI and risk management; and identifying existing and emerging AI risks.

The groups have issued a list of 65 key AI terms essential to understanding risk-based approaches to AI, along with their EU and US interpretations and shared EU-US definitions, and have mapped the respective involvement of the EU and the US in standardisation activities, with the goal of identifying AI-related standards of mutual interest. The two sides have agreed to carry these approaches into multilateral discussions such as the G7 and the Organisation for Economic Co-operation and Development (OECD).

The newly formed EU-India Trade and Technology Council (TTC) aims to tackle the strategic challenges of trusted technology. It is cast as a permanent structure for dialogue on digital and trade policies, following the format of the EU-US TTC, in areas such as digital connectivity, green technology and trade. It met for the first time in Brussels in May 2023, paving the way for cooperation in several strategic areas, including AI governance.

The TTC will offer political guidance as well as the structure needed to implement political decisions effectively, coordinate technical work and ensure accountability at the political level. It should help increase EU-India bilateral trade, which is at a historical high: €120 billion worth of goods were traded in 2022, alongside €17 billion of digital products and services.

The TTC is divided into several working groups, the first of which is the most relevant for our purposes: the Working Group on Strategic Technologies, Digital Governance and Digital Connectivity, which covers areas such as digital connectivity, AI, 5G/6G, cloud systems, quantum computing, semiconductors, digital training and big tech platforms. The aim is to find convergence on several aspects of digital policy, the widest underlying disagreement being over the approach to cross-border data flow regulation and questions surrounding data localisation.

The meeting's conclusions also point to coordinated policies on AI and semiconductors and joint work to bridge the digital skills gap. Cooperation on global digital policy between the two largest democracies on the planet is bound to facilitate access to the rapidly expanding Asian market.

Let us now try to draw some brief conclusions from the foregoing.

First, the ideological differences between countries on whether and how to regulate AI could have broader geopolitical consequences for the management of AI and information technology in the years to come. Control over strategic resources such as data, software and hardware has become important for all countries, as demonstrated by discussions over international data transfers, cloud computing resources, the use of open-source software, and so on.

These developments seem, at least for now, to increase fragmentation, mistrust, and geopolitical competition, and as such pose enormous challenges to the goal of establishing an agreed approach to artificial intelligence based on respect for human rights.

To some extent, however, shared values are evolving into an ideological mechanism aimed at ensuring a human rights-centred approach to the role and use of AI. Put differently, an alliance is currently forming around a human rights-oriented view of socio-technical governance, embraced and encouraged by like-minded democratic nations. This, to me, is the direction India, the world's most populous democracy, should take, engaging in greater coordination on the development of evaluation and measurement tools that contribute to credible AI regulation, risk management and privacy-enhancing technologies.

Secondly, we definitely need to avoid the fragmentation of technological ecosystems. Securing AI alignment at the international level is likely to be the major challenge of our century. Like the EU AI Act, the proposed US Algorithmic Accountability Act of 2022 would require organisations to perform impact assessments of their AI systems before and after deployment, including providing more detailed descriptions of data, algorithmic behaviour and forms of oversight.

Thirdly, undoubtedly, AI will continue to revolutionise society in the coming decades. However, it remains uncertain whether the world's countries can agree on how the technology should be deployed for the greatest possible societal benefit.

Fourth and finally, however AI governance is ultimately designed, it must be understandable to the average citizen, to businesses, and to practising policymakers and regulators confronted with a plethora of initiatives at all levels. AI regulations and standards need to be in line with our reality. Taking AI to the next level means increasing the digital prowess of global citizens, fixing the rules for the market power of tech giants, and understanding that transparency is part of the responsible governance of AI. And at the global level, it will be crucial to cooperate strategically and continuously with partners within the framework of the International Digital Partnership on AI.

Read the rest here:
Opinion | Artificial Intelligence Whither India? - News18

Read More..

Bitcoin Price Prediction: Tradecurve a safe haven as Ethereum sell … – Finbold – Finance in Bold

With Ethereum (ETH) and Bitcoin (BTC) facing a selling spree, investors are in dire need of a secure alternative. Say hello to Tradecurve, billed as the perfect savior in the current investment landscape: this new project is tipped for higher returns and unparalleled financial stability. Check out more details below.


Erich Garcia Cruz, a Bitcoin (BTC) enthusiast and businessman in Cuba, has been vocal about the crypto's adoption in the country. He suggests private businesses can benefit from adopting Bitcoin (BTC) as a currency.

In fact, Cruz has been supporting Bitcoin (BTC) adoption since 2020. He will also take part in a documentary in which he explains to the Cuban people how they can use Bitcoin (BTC) to get rich.

Furthermore, Cruz describes Cuba as a communist economy whose ruling party is likely to oppose Bitcoin (BTC). However, he believes that Bitcoin (BTC) can give Cubans independence from central authorities. Currently, Bitcoin (BTC) is trading at $25,824.28.

Such a high price can also become an obstacle to adoption. Even so, experts predict it will reach $31,005.61 by year-end.

Recently, Ethereum (ETH) introduced two new upgrades to re-engage its past investors. One of them, called Shapella, was created by Ethereum (ETH) developers to enable holders to unstake their tokens.

The idea was to give Ethereum (ETH) holders control over their stakes, in the hope of encouraging them to re-stake.

The upgrade came after Ethereum (ETH) had suffered continuous withdrawals from users. It recorded a high net inflow of Ethereum (ETH) tokens, and was also reported to have grown the market for liquid staking tokens/derivatives (LSDs).

However, the upgrade hasn't caused any drastic change in Ethereum's (ETH) price. It is currently trading at $1,688.08, a 2.99% rise in a day, and experts predict it will reach a mere $1,691.96 by year-end.

Tradecurve is here to redefine the world of online trading by seamlessly blending the best aspects of centralized and decentralized exchanges into one powerful platform. Tradecurve eliminates the hassle of juggling multiple accounts on different platforms.

With just a single account, investors gain access to an extensive array of markets, including forex, stocks, commodities and cryptocurrencies. Last year, trading volume in crypto investment assets increased by 127%, a figure that could be boosted this year by the new platform's accessibility.

Unlike other exchanges, including Coinbase and Binance, this platform believes in simplicity and accessibility. For example, only the user's email address is required to register an account, with no intrusive KYC procedures. Users get ultra-competitive fees and spreads, maximising profits and cutting out unnecessary expenses.

It leverages cutting-edge technology to deliver low-latency, ultra-fast order execution, letting traders capitalize on market opportunities as they arise, with orders executed swiftly and accurately. Join its 4th live presale today and discover a trading experience that transcends boundaries.

TCRV empowers traders of all backgrounds to take control of their financial futures. The token price has already soared from $0.015 to $0.018, and experts predict it will skyrocket to $0.025.

Brace yourself for its projected 100x growth in just a few months. Invest today and reap unimaginable returns!
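
For context, the returns implied by those figures can be checked with simple arithmetic. The short Python sketch below simply takes the quoted prices at face value; the variable names are my own.

entry, current, predicted = 0.015, 0.018, 0.025

# Gain realised so far in the presale, as a percentage.
print(round((current / entry - 1) * 100, 1))       # 20.0
# Further gain implied by the $0.025 prediction, as a percentage.
print(round((predicted / current - 1) * 100, 1))   # 38.9
# Price a literal 100x from the current level would require, in dollars.
print(round(current * 100, 2))                     # 1.8

On those numbers, the presale has gained 20% so far and the prediction implies roughly another 39%; a literal 100x move would put the token at $1.80.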

For more information about the Tradecurve presale:

Click Here For Website

Click Here To Buy TCRV Presale Tokens

Follow Us on Twitter

Join Our Community on Telegram

Originally posted here:

Bitcoin Price Prediction: Tradecurve a safe haven as Ethereum sell ... - Finbold - Finance in Bold

Read More..