
Physical works out the dark side of the mind in an honest way – Metro US


The world of 1980s aerobics takes on a new kind of form in Apple TV+'s latest dark comedy starring Rose Byrne, Physical. Instead of focusing on the outward benefits of fitness, this series from creator Annie Weisman is more than skin deep, taking a deeper look at eating disorders and negative self-image.

Byrne plays Sheila, a housewife in San Diego unhappy with her appearance. Audiences right away get a front-row seat inside Sheila's mind and the way that she talks to herself. Whether it's comparing her looks to a friend's, chastising herself for wanting to eat dessert or carbs, or spending her life savings gorging on burgers whenever she feels out of control, it's a new look at eating disorders that hasn't been shown before. It shows the high-functioning and exhausting side of living with a constant magnifying glass up to yourself, where you are your own worst critic.

Sheila and her husband (Rory Scovel), a once-radical idealist turned college professor, are thrown for a loop within the first few episodes of the series, and in an effort to raise more money and spark a new curiosity, Sheila embarks into the world of aerobics, which eventually helps her find her own voice. The hilarious narration is coupled with Sheila's inner thoughts, and perhaps what is most provoking about this series is just how relatable our own dark thoughts and feelings can be.

Weisman sat down with Metro to discuss the personal influences that led her to create Physical.

Being a personal story, what parts of your life was Physical drawn from?

I had really struggled for decades with eating disorders and it was something I didn't share privately or publicly. It was a quiet part of my life. I just reached a place of feeling fed up with the struggle and decided to try and express it on paper. I had a lot of fear: the illness is really good at telling you that if you share it, it will define you and destroy you, and none of that is actually true.

I think anyone who's in any kind of recovery will tell you that these things thrive in shame and they thrive in secrecy, and actually sharing the story has been really liberating and has really connected me to other people. So, that was the impulse that started [with me] wanting to tell the story, and then the decision to put it in this unique world came separately. I really wanted to be able to tell the story through an unexpected lens.

Right away we see Sheila talking to herself very negatively, and that's something that stood out to me as not often shown on TV.

I think when people think about eating disorders, they think about the behavior, and really it's the thoughts and ideas and the voice that it comes from, at least for me. I really hadn't seen anything on screen that represented how I experienced the struggle, so I wanted to explore that distance between what I projected to the world and how I felt on the inside, which was so different.

I knew people had no idea how much I reserved the rage, anger and pain I felt for myself. So, that was part of the inspiration for the show: to accurately represent the way this disorder manifested in me, which was this real divide between the exterior and interior. I just tried to be authentic about it. I've found, as I've started to share it with my creative collaborators, that people related to that feeling. It's something that I thought I was really alone with, because it was so private, but as I've started to share it, people are sharing with me that it feels familiar.

Another misconception is that someone who is petite might not experience these thoughts and feelings.

Exactly. Rose is the real dream actress for this part in so many ways: She understands that this makes you excellent at disguising how you feel on the inside. You would never know that someone who, to everyone else, looks so comfortable in their skin and beautiful would have this self-image and would beat themselves up in this way. It's a lie that women tell themselves; it's how they feel on the inside, and it's not how it looks on the outside.

At its core, it isn't solved by changing the way you look. It's only solved by tapping into your true emotions and changing the way you feel. That's what the journey of this show is about: we start in a really dark place where there's such a divide in her, and then we're moving towards this more integrative self, where she takes that voice, stops beating herself up with it and starts unleashing it where it belongs: out in the world.

That shift is definitely seen when Sheila starts to teach aerobics herself.

That's absolutely right. It's not about doing aerobics, it's about finding her voice as a teacher. It's about harnessing that tough inner voice and using it to inspire others.

What would you tell women on their own journey to realizing who their true self really is to expect from watching Sheila's journey from start to finish?

Well, I hope people who struggle with the same kind of divide will see themselves and find some comfort like I did and can share that experience with others. I also hope they see and understand that what you think is your biggest weakness can actually be your biggest strength; that was my experience in writing and getting it out. Tapping into something that I was ashamed of made me feel more powerful. I hope people who watch it feel that way too.

What went into creating the look and feel of the time and the world of aerobics for the show?

It's set in the world I grew up in: San Diego in the '70s and '80s. I really wanted to depict this Southern California world that was going through this change. We get to depict beach culture, we get to depict mall culture, we get to depict the new world of fitness culture that we take for granted now: these radical spaces for women collectively working out, at a time when it was still taboo for women to be sweating and building muscles in public. So we get to play with building that world as it was just beginning.

What are you most excited for audiences to see in the series?

I'm just really excited for people to see Rose's performance because I think she is going to blow people's minds. Even among her big fans, I don't think anyone has seen her play this kind of central, layered and dynamic role, and I think what she's capable of and her fearlessness is going to really blow people away. I'm really excited to share that.

The first three episodes of Physical premiere on Apple TV+ June 18.


Bengio Team Proposes Flow Network-Based Generative Models That Learn a Stochastic Policy From a Sequence of Actions – Synced

For standard reinforcement learning (RL) algorithms, the maximization of expected return is achieved by selecting the single highest-reward sequence of actions. But for tasks in a combinatorial domain such as drug molecule synthesis, where exploration is important, the desired goal is no longer to simply generate the single highest-reward sequence of actions, but rather to carefully sample a diverse set of high-return solutions.

To address this specific machine learning problem, a research team from Mila, McGill University, Université de Montréal, DeepMind and Microsoft has proposed GFlowNet, a novel flow-network-based generative method that can turn a given positive reward into a generative policy that samples with a probability proportional to the return.

The team summarizes their contributions as:

In graph theory, a flow network is a directed graph with sources and sinks, where each edge has a capacity and each edge receives a flow. The motivating task for this flow network is iterative black-box optimization, where the agent has to compute a reward for a large batch of candidates at each round. The idea behind the proposed GFlowNet is to view the probability assigned to an action given a state as the flow associated with a network whose nodes are states, and whose outgoing edges from a node are deterministic transitions driven by an action.

The team defines a flow network with a single source, where the sinks of the network correspond to the terminal states. Given the graph structure and the outflow of the sinks, they attempt to calculate a valid flow between nodes. Notably, such a construction corresponds to a generative model, and the researchers provide rigorous proofs showing that if we follow the flow, we reach a terminal state (a sink) with probability proportional to the return.

Based on the above theoretical results, the researchers then create a learning algorithm. They propose approximating the flows such that, with enough capacity in the flow estimator, the flow conditions are satisfied at convergence. This yields an objective function for GFlowNet whose minimization achieves their desideratum: a generative policy that samples with a probability proportional to the return.
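In simplified form (a paraphrase of the construction rather than the paper's exact notation), the flow conditions say that for every non-terminal state s' the flow entering it must equal the flow leaving it, while the flow into a terminal state x must equal its reward:

\begin{aligned}
\sum_{s \in \mathrm{Par}(s')} F(s \to s') &= \sum_{s'' \in \mathrm{Child}(s')} F(s' \to s'') \quad \text{for non-terminal } s', \\
\sum_{s \in \mathrm{Par}(x)} F(s \to x) &= R(x) \quad \text{for terminal } x.
\end{aligned}

The sampling policy then simply follows the flow, \pi(s' \mid s) = F(s \to s') / \sum_{s''} F(s \to s''). One way to train a flow estimator F_\theta is to penalize, at every visited state, the mismatch between the in-flow and the out-flow plus reward,

\mathcal{L}(s') = \left( \log\!\left[\epsilon + \sum_{s} F_\theta(s \to s')\right] - \log\!\left[\epsilon + R(s') + \sum_{s''} F_\theta(s' \to s'')\right] \right)^{2},

where the small constant \epsilon stabilizes the logarithms; minimizing this loss over sampled trajectories enforces the flow conditions described above.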

The team conducted various experiments to evaluate the performance of the proposed GFlowNet. For example, in an experiment that generates small molecules, the team reported the empirical distribution of rewards and the average reward of the top-k as a function of learning.

Compared to the baseline MARS (Xie et al., 2021), GFlowNet found more high-reward molecules. The results also show that for both GFlowNet and MARS, the more molecules are visited, the better they become, with a slow convergence towards the proxy's max reward.

Overall, GFlowNet achieves competitive results against baseline methods on the molecule synthesis domain task and performs well on a simple domain where there are many modes to the reward function. The research team believes GFlowNet can serve as an alternative approach for turning an energy function into a fast generative model.

The implementations are available on the project GitHub. The paper "Flow Network based Generative Models for Non-Iterative Diverse Candidate Generation" is on arXiv.

Author: Hecate He | Editor: Michael Sarazen, Chain Zhang




Consumer Companies Bail on Non-Core Assets: "Deep is the New Wide" – Mergers & Acquisitions

Consumer sector players are increasingly eager to part with assets seen as non-core in a market that rewards specialization, says Kearney partner Bahige El-Rayes. "Deep is the new wide," El-Rayes says. "It used to be about diversified products; now it's about consistency, agility. In many cases consumer [companies] realized they don't need everything they have."

That realization is widely felt. Three quarters of sector executives at the top 20 consumer companies surveyed by global strategy and management consulting firm Kearney said that they could carry out a major divestiture. The rationale? Strategic restructuring to strengthen balance sheets and enable growth.

The novelty in the survey data, released today, is in the type of growth corporates seek on the back of disposals. Rather than deploying capital in megadeals that fundamentally reorient strategy, executives tell Kearney that they want to pursue small, discrete acquisitions that give them exposure to a market without weighing in as a potential drag on the balance sheet should markets shift.

The defining word is "optionality," El-Rayes explains. "Companies are increasingly like venture capitalists; they need to be more agile and less certain in terms of trends and capability."

To capture upside from an emerging technology, for instance, executives are now thinking: "There are a ton of technologies. I want to be positioned but don't want a billion-dollar investment. I want a footprint, but not so much that I'll be exposed in a few years."

Grocery shoppers are clamoring for personalized nutrition and shopping options that improve gut health, but how companies position themselves for a possibly transient shift in preferences is less clear.

"Health and food are coming together, and it's not certain if it's a fad or not," is how El-Rayes frames the scenario. "Is that something that consumer packaged goods companies have a role to play in, or are they the big evil? Who is the culprit for the processed sugar? Where are the growth opportunities? Should packaged goods companies double down on that?"

Rather than seeking a transformational merger of equals, corporates could play the possible outcomes by acquiring a small health food company. The parent can drive reverse integration such that the values and positioning of the fresh-focused target permeate the wholeco, driving organizational change without the price tag and legacy assets of a larger deal.

Mondelez's 2019 acquisition of a majority stake in Perfect Snack owner Perfect Brands is a zeitgeist transaction: the maker of Oreos and Cadbury eggs acquired organic, non-GMO clean snacks in the deal. Meanwhile, Cargill invested $75 million in textured vegetable protein maker Puris to bring its total investment to $100 million the same year.

This preference for option value is evident in trending valuations. Deal multiples for small and midsize deals are increasing relative to large deals, which have cumulatively fallen 34 percent since 2018. When asked what sized targets they plan to acquire, approximately 60 percent of respondents pointed to targets of $500 million and below.

While divestitures are top of mind in consumer sector boardrooms across the globe, so are the depths of redeployment. Kearney, working with Dealogic data, projects consumer M&A to rebound off 2020 lows should first quarter transaction rates continue apace.

El-Rayes is co-author of the report Forged in Crisis, Poised to Innovate, and leads the UK and Ireland consumer practice of Kearney.


Towards Broad Artificial Intelligence (AI) & The Edge in 2021 – BBN Times

Artificial intelligence (AI) has quickened its progress in 2021.

A new administration is in place in the US and the talk is about a major push for Green Technology and the need to stimulate next generation infrastructure including AI and 5G to generate economic recovery, with David Knight forecasting that 5G has the potential - the potential - to drive GDP growth of 40% or more by 2030. The Biden administration has stated that it will boost spending in emerging technologies, including AI and 5G, to $300Bn over a four year period.

On the other side of the Atlantic Ocean, the EU have announced a Green Deal and also need to consider the European AI policy to develop next generation companies that will drive economic growth and employment. It may well be that the EU and US (alongside Canada and other allies) will seek ways to work together on issues such as 5G policy and infrastructure development. The UK will be hosting COP 26 and has also made noises about AI and 5G development.

The world needs to find a way to successfully end the Covid-19 pandemic and in the post pandemic world move into a phase of economic growth with job creation. An opportunity exists for a new era of highly skilled jobs with sustainable economic development built around next generation technologies.

AI and 5G: GDP and jobs growth potential plus scope to reduce GHG emissions (source for numbers: PwC / Microsoft, Accenture)

The image above sets out the scope for large reductions in emissions of GHGs whilst allowing for economic growth.

GDP and jobs growth will be very high on the post pandemic agendas of governments around the world. At the same time, the economies that truly prosper and grow rapidly in this decade will be those that adopt Industry 4.0 technology, which in turn will lead to a shift away from the era of heavy fossil fuel consumption towards a digital world that may be powered by renewable energy, with transportation that is either heavily electric or, over time, hydrogen based.

2021 will mark the continued acceleration of Digital Transformation across the economy.

Firms will be increasingly "analytics driven" (it needs to be stressed that analytics driven, rather than data driven, is the key term). Data is the fuel that needs to be processed. Analytics provide the ability for organisations to generate actionable insights.

Source for image above: Lean BI

An example of how machine-to-machine communication at the Edge, enabled by AI, could work is demonstrated by the following image:

In the image above, machine-to-machine communication allows a broadcast across the network that a person has been detected stepping onto the road, so that even a car that does not have line of sight to the person is aware of their presence.

It is important to note that AI alongside 5G networks will be at the heart of this transition to the world of Industry 4.0.

5G will play an important role, as 5G networks are not only substantially faster than 4G networks, but they also enable significant reductions in latency, in turn allowing for near real-time analytics and responses, and they provide far greater connection capacity, thereby facilitating massive machine-to-machine communication for IoT devices on the Edge of the network (closer to where the data is created, on the device).

The image below sets out the speed advantage of 5G networks relative to 4G.

Source for image above: Thales Group

However, as noted above, 5G has many more advantages over 4G than speed alone, as shown in the image below:

Source for image above: Thales Group

The growth in Edge Computing will reduce the amount of data being sent back and forth to a remote cloud server, thereby making the system more efficient.

Source for image above: Thales Group

The economic benefits of 5G are set out below:

$13.2 trillion in global economic output

22.3 million new jobs created

$2.1 trillion in GDP growth

Towards AI at the Edge (AIIoT)

To date, AI has been most pervasive and effective for the Social Media and Ecommerce giants, whose large digital data sets give them an advantage and where edge cases don't matter so much in terms of their consequences. No fatalities, injuries or material damages arise from an incorrect recommendation for a video, a post, or an item of clothing, other than a bad user experience.

However, when we seek to scale AI into the real world, edge cases and interpretability matter. Issues such as causality and explainability become key in areas such as autonomous vehicles and robots and also in healthcare.

Equally, data privacy and security really matter. On the one hand, as noted above, data is the fuel for Machine Learning models. On the other hand, in areas such as healthcare much of that data is often siloed and decentralised, as well as protected by strict privacy rules in the likes of the US (HIPAA) and Europe (GDPR). It is also an issue in areas such as Finance and Insurance, where data privacy and regulation are of significant importance to the operations of financial services firms.

This is an area where Federated Learning with Differential Privacy could play a big role in scaling Machine Learning across areas such as healthcare and financial services.

Source for image above: NVIDIA, What is Federated Learning?

It is also an area where the US and Europe could work together to enable collaborative learning and help scale Machine Learning that also provides data security and privacy for end users (patients). The Healthcare sector around the world is at breaking point due to the strains of the Covid-19 pandemic, and augmenting our healthcare workers with AI, whilst ensuring that patient data security is maintained, will be key to transforming our Healthcare systems to reduce the strain on them and deliver better outcomes for patients.

Source for image above: TensorFlow Federated

For more on Federated Learning see: Federated Learning, an Introduction.
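As a deliberately simplified illustration of what federated learning with differential privacy computes, consider the textbook FedAvg update with clipped, noised client contributions (a generic formulation, not the specific scheme used by any particular framework mentioned above). Each client k trains locally and sends back an update \Delta_k = w_{t+1}^{k} - w_t; the server clips each update to norm C, adds Gaussian noise, and averages:

\tilde{\Delta}_k = \Delta_k \cdot \min\!\left(1, \frac{C}{\lVert \Delta_k \rVert_2}\right), \qquad
w_{t+1} = w_t + \frac{1}{K}\left( \sum_{k=1}^{K} \tilde{\Delta}_k + \mathcal{N}\!\left(0, \sigma^2 C^2 I\right) \right).

Raw patient or customer records never leave the client; only clipped, noised model updates are shared, which is what makes the approach attractive under HIPAA- and GDPR-style constraints.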

In relation to AI, we will need to move away from the giant models and techniques that were predominant in the last decade towards neural compression (pruning), which in turn will enable models to operate more efficiently on the Edge, help preserve the battery life of devices and reduce carbon footprint through lower energy consumption.

Furthermore, we won't only require Deep Learning models that can run inference on the Edge, but also models that can continue to learn on the Edge, on the fly, from smaller data sets, and respond dynamically to their environments. This will be key to enabling effective autonomous systems such as autonomous vehicles (cars, drones) and robots.

Solving for these challenges will be key to enabling AI to scale beyond Social Media and Ecommerce across the sectors of the economy.

It is no surprise that the most powerful AI companies today, and over the last few years, tend to be from the Ecommerce and social media sectors.

Furthermore, the images below from Valuewalk show how ByteDance (owner of TikTok) is the world's most valuable Unicorn and an AI company.

Source for image above: Valuewalk, Tipalti, The Most Valuable Unicorn in the World 2020

Venture Capitalists and Angel Investors should also understand that, in order to scale AI startup ventures, access to usable data and meeting the requirements of their customers in terms of usability (which may include some or all of transparency, causality, explainability, model size and ethics) are key for many sectors.

The number of connected devices and the volume of data are forecast to grow dramatically as Digital Technology continues to expand its reach. For example, the image below shows a forecast from Statista of 75 billion internet-connected devices by 2025, an average of over nine per person on the planet.

Data will grow, but an increasing amount of it will be decentralised, dispersed around the Edge.

Source for image above: IDC

In fact, IDC forecasts that "the global datasphere will grow from 45 zettabytes in 2019 to 175 by 2025. Nearly 30% of the world's data will need real-time processing. ... Many of these interactions are because of the billions of IoT devices connected across the globe, which are expected to create over 90 ZB of data in 2025."

Illustration of the AI IoT across the Edge

Source for infographic images below: Iman Ghosh, VisualCapitalist.com

In the past decade, key Machine Learning tools such as XGBoost, LightGBM and CatBoost emerged (approximately 2015 to 2017), and these tools will continue to be popular with Data Scientists for extracting powerful insights from structured data using supervised learning. No doubt we will see continued enhancements in Machine Learning tools over the next few years.

In relation to areas such as Natural Language Processing (NLP), Computer Vision and Drug Discovery efforts, Deep Learning will continue to be the effective tool. However, it is submitted that the techniques will increasingly move towards the following:

Transformers (including in Computer Vision);

Neuro-Symbolic AI (hybrid AI that combines Deep Learning with symbolic logic);

Neuroevolutionary (hybrid approaches that combine Deep Learning with evolutionary algorithm approaches);

Some or all of the above combined with Deep Reinforcement Learning.

This will lead to an era of Broad AI, as AI starts to move beyond narrow AI (performing just one task) towards multitasking, though not yet at the level where AI can match the human brain (AGI).

My own work is focused on the above hybrid approaches for Broad AI. As we seek to find ways to scale AI across the economy beyond Social Media and Ecommerce, these approaches will be key to enabling true Digital Transformation with AI across traditional sectors of the economy and to moving into the era of Industry 4.0.

Source for image above: David Cox, IBM Watson

The MIT-IBM Watson AI Lab defines Broad AI and the types of AI as follows:

"Narrow AI is the ability to perform specific tasks at a super-human rate within various categories, from chess, Jeopardy!, and Go, to voice assistance, debate, language translation, and image classification."

"Broad AI is next. We're just entering this frontier, but when it's fully realized, it will feature AI systems that use and integrate multimodal data streams, learn more efficiently and flexibly, and traverse multiple tasks and domains. Broad AI will have powerful implications for business and society."

"Finally, General AI is essentially what science fiction has long imagined: AI systems capable of complex reasoning and full autonomy. Some scientists estimate that General AI could be possible sometime around 2050, which is really little more than guesswork. Others say it will never be possible. For now, we're focused on leading the next generation of Broad AI technologies for the betterment of business and society."

I would add Artificial Super Intelligence (or Super AI) to the list above, as this is a type of AI that often gains much attention in Hollywood movies and television series.

In Summary

Whether one views 2021 as the first year of a decade or not, 2021 will mark a year of reset across the economy and hopefully one whereby we start to move beyond the Covid pandemic to a post-pandemic world.

California will remain a leading area for AI development with the presence of Stanford, UC Berkeley, Caltech, UCLA, and University of San Diego. However, other centres for AI will continue to grow around the US and the world, for example Boston, Austin, Toronto, London, Edinburgh, Oxford, Cambridge, Tel Aviv, Dubai, Abu Dhabi, Singapore, Berlin, Paris, Barcelona, Madrid, Lisbon, Sao Paulo, Tallinn, Bucharest, Kyiv / Kharkiv, Moscow and of course across China (many other examples of cities could be cited too). AI will become a pervasive technology that is increasingly in the devices (including within our mobile phones) that we interact with every day, and not just when we enter our social media accounts or go online to shop.

It will also mark a reset for AI to be increasingly on the Edge and across the "real-world" sectors of the economy with the emergence of Broad AI to take over from Narrow AI as we move across the decade.

Smaller models will be more desirable / more useful

GPT-3 is an exciting development in AI and shows the potential of Transformer models; however, in the future, small will be beautiful and crucial. The human brain does not require the amount of server capacity of GPT-3 and uses far less energy. For AI to scale across the edge we'll need powerful models that are energy efficient and optimised to work on small devices. For example, Mao et al. set out LadaBERT: lightweight adaptation of BERT (a large Transformer language model) through hybrid model compression.

The authors note "...a major blocking issue of applying BERT to online services is that it is memory-intensive and leads to unsatisfactory latency of user requests, raising the necessity of model compression. Existing solutions leverage the knowledge distillation framework to learn a smaller model that imitates the behaviours of BERT."

"However, the training procedure of knowledge distillation is expensive itself as it requires sufficient training data to imitate the teacher model."

"In this paper, we address this issue by proposing a hybrid solution named LadaBERT (Lightweight adaptation of BERT through hybrid model compression), which combines the advantages of different model compression methods, including weight pruning, matrix factorization and knowledge distillation. LadaBERT achieves state-of-the-art accuracy on various public datasets while the training overheads can be reduced by an order of magnitude."

Reducing training overheads and avoiding unsatisfactory latency of user requests will also be a key objective of Deep Learning development and evolution over the course of 2021 and beyond.

My Vision of Connectionism: Connecting one human to another (we're all human beings), connecting AI with AI, and AI with humans all at the level of the mind.

When I adopted the @DeepLearn007 handle on Twitter many years ago, I was inspired by the notion ofconnectionism, and the image that I selected for the account illustrates how 2 human beings could connect at the level of the brain and how the exchange of information, in effect ideas, drives innovation and the development of humanity. In the virtual world much of that occurs at the level of data and the analytical insights that we gain from that data through application of AI (Machine Learning and Deep Learning) to generate responses.

I remain a connectionist, albeit an open minded one. I believe that Deep Neural Networks will remain very important and the cornerstone of AI development. But just as Deep Reinforcement Learning combined Reinforcement Learning with Deep Learning to very powerful effect, with the likes of AlphaGo, AlphaZero, and MuZero resulting, so too developing hybrid AI that combines Deep Learning with Symbolic and Evolutionary approaches will lead to exciting new product developments and enable Deep Learning to scale beyond the Social Media and Ecommerce sectors, where the likes of medics and financial services staff want causal inference and explainability in order to trust the AI decision making. For example, Microsoft Research states that "understanding causality is widely seen as a key deficiency of current AI methods, and a necessary precursor for building more human-like machine intelligence."

Furthermore, in order for autonomous vehicles to truly take off, we'll need model explainability for situations where things have gone wrong, in order to understand what happened and how we may reduce the probability of the same outcome in the future.

The next generation of AI will move in the direction of the era of Broad AI, and the adventure will be here in 2021 as we move towards the Edge, towards a better world beyond the scars and challenges of 2020. The journey may require scaled-up 5G networks around the world to really transform the broader economy, and that may only really start to happen at the end of the year and beyond, but the direction of the pathway is clear.

The exciting potential for healthcare, smart industry, smart cities, smart living, education, and every other sector of the economy will mean that a new generation of businesses will emerge that we cannot even imagine today.

Perhaps a good point to conclude is with the forecast from Ovum and Intel for the impact of 5G on the media sector (of course AI will play a big role alongside 5G in developing new hyper-personalised services and products, and the two have a symbiotic relationship).

Source for the image above: Intel Study Finds 5G will Drive $1.3 Trillion in New Revenues in Media and Entertainment Industry by 2028


It’s more than just skin-deep: Feel and look amazing with Bubble Skincare | Sponsored – Harvard Crimson

*Sabrina is a fictional character whose experiences are meant to represent young people around the country.

In a world where young people are constantly bombarded with unrealistic expectations, information overload, and knowledge of the world's issues, it is a stressful time for teens and young adults to grow up. Furthermore, as their skin begins to change, they are met with a skincare industry that is often confusing, misinforming, and overwhelming. Upon realizing that this industry did not have a high-quality, accessible, and affordable option for teens and young people, Bubble Skincare sought to change this by creating a brand that would focus not only on helping people take care of their skin, but also on making sure that people are empowered to take care of their mental health. Sabrina, a fictional freshman in college, just started using Bubble Skincare for her night routine, and has already begun to see a difference in her skin and to feel more confident.

Whether you already have some understanding of skin care or are completely new, whether you have dry skin, oily skin, sensitive skin, or anything in between, and no matter your identity, Bubble Skincare has something for you to start looking and feeling better. Furthermore, not only will you set a foundation for healthy skin and mind as you continue to grow and thrive, you will also help others, with Bubble Skincare donating 1 percent of proceeds to organizations that provide mental health support to teens.

Put your best face forward with Bubble: treat both your skin and yourself with love!

The Crimson's news and opinion teams, including writers, editors, photographers, and designers, were not involved in the production of this article.


AI in Healthcare Market Drivers, Challenges, Opportunities and Competitive Strategy Over 2021-2031 | Nuance Communications, Inc., DeepMind…

Scope of Trending Report:

Global AI in Healthcare Market: The report provides a valuable source of insightful data for business strategists and competitive analysis of the AI in Healthcare Market. The main aim of this AI in Healthcare report is to help the user understand the market in terms of its definition, segmentation, market potential, influential trends, and the challenges that the market is facing. This report will aid the users in understanding the market in depth.

The report on AI in Healthcare market offers an overview of several major countries spread across various geographic regions over the globe. The report concentrates on recognizing various market developments, dynamics, growth drivers and factors hampering the market growth. Further, the report delivers comprehensive insights into numerous growth opportunities and challenges based on various types of products, applications, end users and countries, among others.

Download a FREE sample copy of this report: https://www.insightslice.com/request-sample/489

We provide detailed product mapping and investigation of various market scenarios. Our expert analysts provide a thorough analysis and breakdown of the market presence of key market leaders. We strive to stay updated with the recent developments and follow the latest company news related to the industry players operating in the global AI in Healthcare market. This helps us to comprehensively analyze the individual standing of the companies as well as the competitive landscape. Our vendor landscape analysis offers a complete study to help you gain the upper hand in the competition.

The major manufacturers covered in this report: Nuance Communications, Inc., DeepMind Technologies Limited, IBM Corporation, Intel Corporation, Microsoft, and NVIDIA Corporation.

Scope of the report: AI in Healthcare Market

The research takes a closer look at prominent factors driving the growth rate of the prominent product categories across major geographies. Furthermore, the study covers much of the sales, gross margin, consumption capacity, spending power and customer preference across various countries. The report offers clear indications of how the AI in Healthcare market is expected to witness numerous exciting opportunities in the years to come. Critical aspects including the growing requirement, demand and supply status, customer preference, distribution channels and others are presented through resources such as charts, tables, and infographics.

Request For Customization: https://www.insightslice.com/request-customization/489

The report answers questions such as:

COVID 19 Impact Analysis on AI in Healthcare Market

Given the scale of the pandemic, technology will play a crucial role in addressing every facet of COVID-19. There is also a gradual increase in the number of use cases of AI in Healthcare, which is surging demand. Major applications introduced using AI in Healthcare systems are for security assessment and identity verification. In many countries, law enforcement and organizations have shifted from legacy systems to AI in Healthcare solutions to reduce the overall spread of COVID-19.

The AI in Healthcare market report covers the following regions:

* North America: U.S., Canada, Mexico
* South America: Brazil, Venezuela, Argentina, Ecuador, Peru, Colombia, Costa Rica
* Europe: U.K., Germany, Italy, France, Netherlands, Belgium, Spain, Denmark
* APAC: China, Japan, Australia, South Korea, India, Taiwan, Malaysia, Hong Kong
* The Middle East and Africa: Israel, South Africa, Saudi Arabia

How insightSLICE Is Different From Other Market Research Companies:

InsightSLICE is a prominent market research and consulting firm offering action-ready syndicated research reports, custom market analysis, consulting services, and competitive analysis through various recommendations related to emerging market trends, technologies, and potential absolute dollar opportunities.

Note: * The discount is offered at the Standard Price of the report.

Ask For Discount Before Purchasing This Business Report @ https://www.insightslice.com/request-discount/489

The study on Global AI in Healthcare Market provides crucial insights such as:

About Us:

We are a team of research analysts and management consultants with a common vision to assist individuals and organizations in achieving their short and long term strategic goals by extending quality research services. insightSLICE was founded to support established companies, start-ups as well as non-profit organizations across various industries including Packaging, Automotive, Healthcare, Chemicals & Materials, Industrial Automation, Consumer Goods, Electronics & Semiconductor, IT & Telecom and Energy among others. Our in-house team of seasoned analysts holds considerable experience in the research industry.

Contact Info: 422 Larkfield Ctr #1001, Santa Rosa, CA 95403-1408 | info@insightslice.com | +1 (707) 736-6633


NVIDIA and the battle for the future of AI chips – Wired.co.uk

An AI chip is any processor that has been optimised to run machine learning workloads, via programming frameworks such as Google's TensorFlow and Facebook's PyTorch. AI chips don't necessarily do all the work when training or running a deep-learning model, but operate as accelerators by quickly churning through the most intense workloads. For example, NVIDIA's AI-system-in-a-box, the DGX A100, uses eight of its own A100 Ampere GPUs as accelerators, but also features a 128-core AMD CPU.

AI isn't new, but we previously lacked the computing power to make deep learning models possible, leaving researchers waiting on the hardware to catch up to their ideas. "GPUs came in and opened the doors," says Rodrigo Liang, co-founder and CEO of SambaNova, another startup making AI chips.

In 2012, a researcher at the University of Toronto, Alex Krizhevsky, walloped other competitors in the annual ImageNet computer vision challenge, which pits researchers against each other to develop algorithms that can identify images or objects within them. Krizhevsky used deep learning powered by GPUs to beat hand-coded efforts for the first time. By 2015, all the top results at ImageNet contests were using GPUs.

Deep learning research exploded. Offering 20x or more performance boosts, NVIDIA's technology worked so well that when British chip startup Graphcore's co-founders set up shop, they couldn't get a meeting with investors. "What we heard from VCs was: what's AI?" says co-founder and CTO Simon Knowles, recalling a trip to California to seek funding in 2015. "It was really surprising." A few months later, at the beginning of 2016, that had all changed. "Then, everyone was hot for AI," Knowles says. "However, they were not hot for chips." A new chip architecture wasn't deemed necessary; NVIDIA had the industry covered.

What's in a name?

GPU, IPU, RPU: they're all used to churn through datasets for deep learning, but the names do reflect differences in architecture.

Graphcore

Graphcore's Colossus MK2 IPU is massively parallel, with processors operating independently, a technique called multiple instruction, multiple data. Software is written sequentially, but neural network algorithms need to do everything at once. To address this, one solution is to lay out all the data and its constraints, "like declaring the structure of the problem," says Graphcore CTO Simon Knowles. "It's a graph," hence the name of his company.

But, in May 2016, Google changed everything, with what Cerebras' Feldman calls a "swashbuckling" strategic decision, announcing it had developed its own chips for AI applications. These were called Tensor Processing Units (TPUs), and designed to work with the company's TensorFlow machine learning programming framework. Knowles says the move sent a signal to investors that perhaps there was a market for new processor designs. "Suddenly all the VCs were like: where are those crazy Brits?" he says. Since then, Graphcore has raised $710 million (£515 million).

NVIDIA's rivals argue that GPUs were designed for graphics rather than machine learning, and that though their massive processing capabilities mean they work better than CPUs for AI tasks, their market dominance has only lasted this long due to careful optimisation and complex layers of software. "NVIDIA has done a fabulous job hiding the complexity of a GPU," says Graphcore co-founder and CEO Nigel Toon. "It works because of the software libraries they've created, the frameworks and the optimisations that allow the complexity to be hidden. It's a really heavy lifting job that NVIDIA has undertaken there."

But forget GPUs, the argument goes, and you might design an AI chip from scratch that has an entirely new architecture. There are plenty to choose from. Google's TPUs are application-specific integrated circuits (ASICs), designed for specific workloads; Cerebras makes a Wafer-Scale Engine, a behemoth chip 56 times larger than any other; IBM and BrainChip make neuromorphic chips, modelled on the human brain; and Mythic and Graphcore both make Intelligence Processing Units (IPUs), though their designs differ. There are plenty more.

But Catanzaro argues the many chips are simply variations of AI accelerators, the name given to any hardware that boosts AI. "We talk about a GPU or TPU or an IPU or whatever, but people get too attached to those letters," he says. "We call our GPU that because of the history of what we've done, but the GPU has always been about accelerated computing, and the nature of the workloads people care about is in flux."

Can anyone compete? NVIDIA dominates the core benchmark, MLPerf, which is the gold standard for deep-learning chips, though benchmarks are tricky beasts. Analyst Karl Freund of Cambrian AI Research notes that MLPerf, a benchmarking tool designed by academics and industry players including Google, is dominated by Google and NVIDIA, but that startups usually don't bother to complete all of it because the costs of setting up a system are better spent elsewhere.

NVIDIA does bother, and annually bests Google's TPU. "Google invented MLPerf to show how good their TPU was," says Marc Hamilton, head of solutions architecture and engineering at NVIDIA. "Jensen [Huang] said it would be really nice if we show Google, every time they ran the MLPerf benchmark, how our GPUs were just a little bit faster than the TPU."

To ensure it came out on top for one version of the benchmark, NVIDIA upgraded an in-house supercomputer from 36 DGX boxes to a whopping 96. That required recabling the entire system. To do it quickly enough, they simply cut through the cables, which Hamilton says was about a million dollars' worth of kit, and had new equipment shipped in. This may serve to highlight the bonkers behaviour driven by benchmarks, but it also inspired a redesign of DGX: the current-generation blocks can now be combined in groups of 20 without any rewiring.


Accelerating Deep Learning on the JVM with Apache Spark and NVIDIA GPUs – InfoQ.com

Key Takeaways

Many large enterprises and AWS customers are interested in adopting deep learning with business use cases ranging from customer service (including object detection from images and video streams, sentiment analysis) to fraud detection and collaboration. However, until recently, there were multiple difficulties with implementing deep learning in enterprise applications:

In this tutorial we share how the combination of the Deep Java Library (DJL), Apache Spark 3.x, and NVIDIA GPU computing simplifies deep learning pipelines while improving performance and reducing costs. In this post, you learn about the following:

Data processing and deep learning are often split into two pipelines, one for ETL processing, and one for model training. Enabling deep learning frameworks to integrate with ETL jobs allows for more streamlined ETL/DL pipelines.

Apache Spark has emerged as the standard framework for large-scale, distributed data analytics processing. Apache Spark's popularity comes from its easy-to-use APIs and high-performance big data processing. Spark is integrated with high-level operators and libraries for SQL, stream processing, machine learning (ML), and graph processing.

Many developers are looking for an efficient and easy way to integrate their deep learning (DL) applications with Spark. However, there is no official support for DL in Spark. There are libraries that try to solve this problem, such as TensorFlowOnSpark, Elephas, and CERN, but most of them are engine-dependent. Also, most deep learning frameworks (PyTorch, TensorFlow, Apache MXNet) do not have good support for the Java Virtual Machine (JVM), which Spark runs on.

In this section, we'll walk through several DL use cases for different industries using Scala.

Machine learning and deep learning have many applications in the financial industry. J.P. Morgan summarized six initiatives for their machine learning applications: Anomaly Detection, Intelligent Pricing, News Analytics, Quantitative Client Intelligence, Smart Documents, Virtual Assistants. This indicates that deep learning has a place in many business areas of financial institutions. A good example of this comes from Monzo bank, a fast-growing UK-based challenger bank, which reached 3 million customers in 2019. They successfully automated 30% to 50% of potential users' enquiries by applying Recurrent Neural Networks (RNNs) to their users' sequential event data.

Customer experience is an important topic for most financial institutions. Another example of applying deep learning to improve customer experience is Mastercard, a first-tier global payment solution company. Mastercard successfully built a deep learning-based customer propensity recommendation system with Apache Spark and their credit card transaction data. Such a recommender can provide better and more suitable goods and services to their customers, potentially benefiting the customer, the merchants and Mastercard. Before this project, Mastercard built a Spark ML recommendation pipeline with traditional machine learning methods (i.e., matrix factorization with Alternating Least Squares, or ALS) on data consisting of over 1.4 billion transactions. In order to determine if new deep learning methods could improve the performance of their existing recommender system, they benchmarked two deep learning methods: Neural Collaborative Filtering and Wide and Deep. Both achieved a significant improvement compared to the traditional ALS implementation.
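For context, the "traditional" baseline referred to here is the kind of collaborative-filtering pipeline Spark ML provides out of the box. The following Scala sketch is purely illustrative of a Spark ML ALS recommender (the column names, data path and hyperparameters are hypothetical, not Mastercard's actual pipeline):

import org.apache.spark.ml.recommendation.ALS
import org.apache.spark.sql.SparkSession

object AlsBaseline {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("als-baseline").getOrCreate()

    // Hypothetical ratings table with columns (userId: Int, itemId: Int, rating: Float)
    val ratings = spark.read.parquet("s3://example-bucket/ratings/")

    // Matrix factorization with Alternating Least Squares
    val als = new ALS()
      .setMaxIter(10)
      .setRegParam(0.1)
      .setRank(64)
      .setUserCol("userId")
      .setItemCol("itemId")
      .setRatingCol("rating")

    val model = als.fit(ratings)

    // Top-10 item recommendations per user
    val recs = model.recommendForAllUsers(10)
    recs.show(5, truncate = false)

    spark.stop()
  }
}

Deep learning alternatives such as Neural Collaborative Filtering replace the fixed dot-product scoring of ALS with a learned neural scoring function, which is what Mastercard benchmarked against this kind of baseline.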

Financial systems require very high fault-tolerance and security levels. Java was widely used in these companies to achieve better stability. Since Financial systems also face the challenges of huge amounts of data (1.4 Billion transactions), big data pipelines like Apache Spark are a natural choice to process the data. The combination of Java/Scala with Apache Spark is predominant in these fields.

As the data continues to grow, there is a new type of company that mines and analyzes business data. They serve as a third party to help their clients extract valuable information from their data. This data is typically system logs, anonymous non-sensitive customer information, and sales and transaction records. As an example, TalkingData is a data intelligence service provider that offers data products and services to provide businesses insights on consumer behavior, preferences, and trends. One of TalkingData's core services is leveraging machine learning and deep learning models to predict consumer behaviors (e.g., the likelihood of a particular group to buy a house or a car) and use these insights for targeted advertising. Currently, TalkingData is using a Scala-based big data pipeline to process hundreds of millions of records a day. They built a Deep Learning model and used it across a Spark cluster to do distributed inference tasks. Compared to single-machine inference, the Spark cluster reduced the total inference time from 8 hours to less than 3 hours. They chose DJL with Spark for the following reasons:

For the online retail industry, recommendations and ads are important for providing a better customer experience and driving revenue. The data sizes are usually enormous, and a big data pipeline is needed to clean the data and extract the valuable information. Apache Spark is a natural fit to help deal with these tasks.

Today more and more companies are taking a personalized approach to content and marketing. Amazon Retail used Apache Spark on Amazon EMR to achieve this goal. They created a multi-label classification model to understand customer action propensity across thousands of product categories and used these propensities to create a personalized experience for customers. Amazon Retail built a Scala-based big data pipeline to consume hundreds of millions of records and used DJL to do DL inference on their model.

As shown above, many companies and institutions are using Apache Spark for their Deep Learning tasks. However, with the growing size and complexity of their Deep Learning models, developers are leveraging GPUs to do their training and inference jobs. CPU-only computational power on Apache Spark is not sufficient to handle large models.

GPUs, with their massively parallel architecture, have been driving the advancement of deep learning (DL) over the past several years. With GPUs, you can exploit data parallelism through columnar data processing instead of the traditional row-based reading designed initially for CPUs. This provides higher performance and cost savings.

Apache Spark 3.0 represents a key milestone in this advancement, combining GPU acceleration with large-scale distributed data processing and analytics. Spark 3.0 can now schedule GPU-accelerated ML and DL applications on Spark clusters with GPUs. Spark conveys these resource requests to the underlying cluster manager. Also, when combined with the RAPIDS Accelerator for Apache Spark, Spark can now accelerate SQL and DataFrame data processing with GPUs without code changes. Because this functionality allows you to run distributed ETL, DL training, and inference at scale, it helps accelerate big data pipelines to leverage DL applications.

In Spark 3.0, you can now have a single pipeline, from data ingestion to data preparation to model training on a GPU-powered cluster.

Before Apache Spark 3.0, using GPUs was difficult. Users had to manually assign NVIDIA GPU devices to a Spark job and hardcode all configurations for every executor/task to leverage different GPUs on a single machine. Because the Apache Hadoop 3.1 Yarn cluster manager allows GPU coordination among different machines, Apache Spark can now work alongside it to help pass the device arrangement to different tasks. Users can simply specify the number of GPUs to use and how those GPUs should be shared between tasks. Spark handles the assignment and coordination of the tasks.
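As a hedged sketch of what "specifying the number of GPUs" looks like in practice, the relevant Spark 3.x resource-scheduling settings can be expressed as follows (shown on a SparkSession builder for readability; in a real deployment they are usually passed as --conf flags to spark-submit, and the discovery-script path is a placeholder for whatever script your cluster uses to list visible GPU addresses):

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("gpu-scheduling-example")
  // Request one GPU per executor; the discovery script reports the GPU addresses on each host
  .config("spark.executor.resource.gpu.amount", "1")
  .config("spark.executor.resource.gpu.discoveryScript", "/opt/spark/scripts/getGpusResources.sh")
  // Let two tasks share each GPU (0.5 GPU per task); Spark handles the assignment
  .config("spark.task.resource.gpu.amount", "0.5")
  // Optional: the RAPIDS Accelerator plugin for GPU-accelerated SQL/DataFrame processing
  .config("spark.plugins", "com.nvidia.spark.SQLPlugin")
  .getOrCreate()

The fraction in spark.task.resource.gpu.amount is how GPU sharing between tasks is expressed, and it is the same knob discussed later in the spark-submit section of this tutorial.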

To get the most out of it, let's discuss the following two components:

The RAPIDS Accelerator for Apache Spark combines the power of the RAPIDS library and the scale of the Spark distributed computing framework. In addition, RAPIDS integration with ML/DL frameworks enables the acceleration of model training and tuning. This allows data scientists and ML engineers to have a unified, GPU-accelerated pipeline for ETL and analytics, while ML and DL applications leverage the same GPU infrastructure, removing bottlenecks, increasing performance, and simplifying clusters.

Apache Spark-accelerated end-to-end ML platform stack

NVIDIA worked with the Apache Spark community to add GPU acceleration on several leading platforms, including Google Cloud, Databricks, Cloudera, and Amazon EMR, making it easy and cost-effective to launch scalable, cloud-managed Apache Spark clusters with GPU acceleration.

For its experiments comparing CPU vs. GPU performance for Spark 3.0.1 on AWS EMR, the NVIDIA RAPIDS Accelerator team used 10 TB of simulated data and queries designed to mimic large-scale ETL from a retail company (similar to TPC-DS). This comparison was run on both a CPU cluster and a GPU cluster with 3 TB of TPC-DS data stored on AWS S3. The CPU cluster consisted of 8 instances of m5d.2xlarge as workers and 1 instance of m5d.xlarge as a master. The GPU cluster consisted of 8 instances of g4dn.2xlarge as workers, which have one NVIDIA T4 GPU in each instance (the most cost-effective GPU instances in the cloud for ML), and 1 instance of m5d.xlarge as a master. The CPU cluster costs $3.91 per hour and the GPU cluster costs $6.24 per hour.

In this experiment, the RAPIDS Accelerator team used a query similar to TPC-DS query 97. Query 97 calculates counts of promotional sales and total sales, and their ratio, from the web channel for a particular item category and month to customers in a given time zone. You can see from the Spark physical plan and DAG for query 97 shown below that every line of the physical plan has a GPU prefix attached to it, meaning that every operation of that query runs entirely on the GPU.

Spark SQL query 97 DAG

With this query running almost completely on the GPU, processing time was sped up by a factor of up to 2.6x with 39% cost savings compared to running the job on the Spark CPU cluster. Note that there was no tuning, nor code changes for this query.

Improvements in query time and total costs.

In addition, the NVIDIA RAPIDS accelerator team has run queries with Spark windowing operators on EMR and seen speeds up to 30x faster on GPU than CPU on large datasets.

Deep Java Library (DJL) is a Deep Learning Framework written in Java, supporting both training and inference. DJL is built on top of modern Deep Learning engines (TensorFlow, PyTorch, MXNet, etc). It provides a viable solution for users who are interested in Scala/Java or are looking for a solution to integrate DL into their Scala-based big data pipeline. DJL aims to make deep-learning open source tools accessible to developers/data engineers who use primarily Java/Scala by using familiar concepts and intuitive APIs. You can easily use DJL to train your model or deploy a model trained using Python from a variety of engines without any additional conversion.

By combining Spark 3.x, the Rapids Accelerator for Spark and DJL, users can now build an end-to-end GPU accelerated Scala-based big data + DL pipeline using Apache Spark.

Now let's walk through an example using Apache Spark 3.0 with GPUs for an image classification task. This example shows a common image classification task on Apache Spark for online retail. It can be used to do content filtering, like eliminating inappropriate images that merchants have uploaded. The full project is available in the DJL demo repository.

For full setup information, refer to the Gradle project setup. The following section highlights some key components you need to know.

First, we'll import the Spark dependencies. Spark SQL and ML libraries are used to store and process the images.

Next, we import the DJL-related dependencies. We use DJL API and PyTorch packages, which provide the core DJL features and load a DL engine to run for inference. We also leverage PyTorch-native-cu101 to run on GPU with CUDA 10.1.

1.2 Load model

To load a model in DJL, we provide a URL (e.g., file://, hdfs://, s3://, https://) hosting the model. The model will be downloaded and imported from that URL.

The input type here is a Row from Spark SQL. The output type is a DJL Classifications result. We also define a Translator (not shown in this document) named MyTranslator that handles the pre-processing and post-processing work. The model we load here is a pre-trained PyTorch ResNet18 model from torchvision.
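A hedged sketch of what that model definition might look like is shown below; the model URL is a placeholder, and MyTranslator stands in for the translator described above, whose pre/post-processing code lives in the demo repository.

    // Placeholder URL -- point this at wherever the traced ResNet18 model archive is hosted.
    val modelUrl = "https://example.com/models/resnet18.zip"

    // Criteria describe what to load: input/output types, model location, and translator.
    lazy val criteria = Criteria.builder()
      .setTypes(classOf[Row], classOf[Classifications]) // input: Spark SQL Row, output: classification result
      .optModelUrls(modelUrl)
      .optTranslator(new MyTranslator())                // pre/post-processing, defined in the demo repo
      .build()

    lazy val model = ModelZoo.loadModel(criteria)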

In the main function, we download the images and store them in HDFS. After that, we can create a SparkSession and use Spark's built-in image loading mechanism to load all the images into Spark SQL. After this step, we use mapPartitions to fetch the GPU information.
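A minimal sketch of that loading step, assuming the images have already been copied to a placeholder HDFS path:

    val spark = SparkSession.builder()
      .appName("Image Classification")
      .getOrCreate()

    // Spark's built-in image data source reads each file into a struct column named "image"
    // (origin, height, width, nChannels, mode, data).
    val df = spark.read.format("image")
      .option("dropInvalid", true)
      .load("hdfs:///images/")          // placeholder path
      .select(col("image.*"))           // flatten the struct so each field is its own column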

As shown in the following, TaskContext.resources()("gpu") stores the GPU assigned to this partition. We can pass that GPU id when loading the model so that it is loaded onto that particular GPU. This step ensures that all GPUs on a single machine are properly used. To run inference, call predictor.predict(row).
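Putting those pieces together, the partition-level inference might look roughly like the following sketch, which reuses the DataFrame, model URL, and translator from the sketches above; the GPU id comes from Spark's resource API and is used to pin the model to the assigned device.

    import org.apache.spark.sql.Encoders

    val predictions = df.mapPartitions { partition =>
      // Each task sees the GPU address(es) Spark assigned to it.
      val gpuId = TaskContext.get().resources()("gpu").addresses(0)

      // Load the model onto that specific GPU (sketch: the criteria from above,
      // rebuilt with the device pinned to the assigned GPU).
      val model = ModelZoo.loadModel(
        Criteria.builder()
          .setTypes(classOf[Row], classOf[Classifications])
          .optModelUrls(modelUrl)
          .optTranslator(new MyTranslator())
          .optDevice(Device.gpu(gpuId.toInt))
          .build())

      val predictor = model.newPredictor()
      partition.map(row => predictor.predict(row).toString)
    }(Encoders.STRING)

    predictions.show(truncate = false)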

Next, we run ./gradlew jar to bundle everything we need into a single jar and run it in a Spark cluster.

With EMR release version 6.2.0 and later, you can quickly and easily create scalable and secure clusters with Apache Spark 3.x, the RAPIDS Accelerator, and NVIDIA GPU-powered Amazon EC2 instances. (To set up a cluster using the EMR console, follow the instructions in this article.)

To set up a Spark cluster using the AWS CLI, create a GPU cluster with three instances using the command below. To run the command successfully, you'll need to change myKey to your EC2 pem key name. The --region flag can also be removed if you have a region preconfigured in your AWS CLI.

We use the g3s.xlarge instance type for testing purposes. You can choose from a variety of GPU instances that are available in AWS. The total run time for the cluster setup is around 10 to 15 minutes.

Now, we can run the distributed inference job on Spark. You can choose to do it on the EMR console or from the command line.

The following command tells Spark to run on a YARN cluster and to use a setup (discovery) script to find the GPUs on the different machines. The GPU amount per task is set to 0.5, which means that two tasks share one GPU. You may also need to set the number of CPUs per task accordingly so that the two match: for example, if you have an 8-core CPU and you set spark.task.cpus to 2, four tasks can run in parallel on a single machine.

To achieve the best performance, you can set spark.task.resource.gpu.amount to 0.25, which allows four tasks to share the same GPU. This helps maximize performance because all of the GPU and CPU cores are used. Without a balanced setup, some cores sit idle, which wastes resources.
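As a sketch of how these resource settings fit together (illustrative values and a placeholder discovery-script path; they are normally supplied to spark-submit as --conf flags rather than set in code):

    import org.apache.spark.SparkConf
    import org.apache.spark.sql.SparkSession

    // Illustrative values only.
    val conf = new SparkConf()
      .set("spark.executor.resource.gpu.amount", "1")      // one GPU per executor
      .set("spark.task.resource.gpu.amount", "0.25")       // four tasks share each GPU (0.5 would mean two)
      .set("spark.task.cpus", "2")                         // keep CPU slots in line with the GPU split
      .set("spark.executor.resource.gpu.discoveryScript",
           "/path/to/getGpusResources.sh")                 // placeholder path to a GPU discovery script

    val sparkWithGpus = SparkSession.builder().config(conf).getOrCreate()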

This script takes around 4 to 6 minutes to finish, and you will get a printout of the inference results as output.

DL on Spark is growing rapidly, with more applications and toolkits. Users can build their own DL pipelines with NVIDIA GPUs for better performance. Please check out the links below for more information about DJL and the RAPIDS Accelerator for Spark:

Haoxuan Wang is a data scientist and software developer at Barclays, and a community member of DJL (djl.ai). He is keen on building advanced data solutions for the bank by applying innovative ideas. His main technical interests include natural language processing, graph neural networks, and distributed systems. He was awarded a master's degree (with distinction) in data science from University College London (UCL) in 2019.

Qing Lan is a software development engineer who is passionate about efficient architectural design for modern software and application systems. He focuses on parallel computing and distributed system design, and currently works on deep learning acceleration and deep learning framework optimization.

Carol McDonald works in technical marketing, focusing on Spark and data science. Carol has experience in many roles, including technical marketing, software architecture and development, training, technology evangelism, and developer outreach for companies including NVIDIA, Sun, and IBM. Carol writes industry architectures, best practices, patterns, prototypes, tutorials, demos, blog posts, whitepapers, and ebooks. She has traveled worldwide, speaking and giving hands-on labs, and has developed complex, mission-critical applications in the banking, health insurance, and telecom industries. Carol holds an MS in computer science from the University of Tennessee and a BS in geology from Vanderbilt University. Carol is fluent in English, French, and German.

See original here:
Accelerating Deep Learning on the JVM with Apache Spark and NVIDIA GPUs - InfoQ.com


Review: Bo Burnham's Inside is a successful depiction of a lonely mind – Los Angeles Times

So 2020 happened, and we were stuck inside for a year. Many grew and became aware of the person they were, becoming friends with themselves, while others did quite the opposite, becoming distanced from and unacquainted with themselves.

In Bo Burnham's Inside, we reap the fruits of his mind and find the bare bones of what it means to be a person, and how that is ever-changing. Burnham manages to make even seemingly meaningless topics delve into something bigger. He sings about FaceTiming with your mom, a white woman's Instagram, and even the internet as a whole, in a carnival-barker tone.

This connects to technology, and how being indoors connects you to the outside world yet makes you lonelier than ever. There's also the way overstimulation lessens the human experience.

Music, humor, and intelligence are at the forefront, but behind all of the knee-slapping content, you are left vulnerable. There's almost an inability to write about such a raw experience.

Not only were we blessed with Inside, we also got a taste of the creative process, with clips of Burnham setting up the camera and adjusting the lighting that only get more and more impressive. It was visually gorgeous.

Netflix describes Inside as a comedy special, which does it a disservice; the only funny thing about it was the irony that one could convey such an accurate message in the totally un-#deep manner that Bo has joked so much about.

Bo's previous specials, Make Happy and what., culminated in final numbers diving deep into the introspection and mental instability that he seems to have perfected here, in a longer, clearer, and less fearful exploration that carries much of the second half of Inside.

Inside provides a place of comfort while simultaneously giving you a panic attack as you watch, down to meticulous details such as choosing the correct genre of song to go with the lyrics.

Emotionally devastating, and what a privilege it was. It is frightening that a piece of art like this could go unnoticed, especially given the lack of featuring and promotion by Netflix.

What is remarkable about Bo Burnham's work in Inside is that it presents as a greatest-hits album from an artist of 20 years while feeling current and progressive in the same breath. To make the audience feel comfortable and familiar while challenging them with new forms and modalities is worth taking note of, and highly commendable.

So 2020 happened, and because of it we got to see Bo Burnham live up to his potential and create a deranged masterpiece that every artist could've hoped to create in a time like this.


Read this article:
Review: Bo Burnham's Inside is a successful depiction of a lonely mind - Los Angeles Times


AI in Europe: Who’s leading the way and where is it heading? – Siliconrepublic.com

Ireland may be the EU's biggest adopter of AI, but a new report from Forrester suggests Europe is still slightly behind other regions.

Artificial intelligence is often slated as the technology that will transform the way we live and do business. But some have embraced it more than others.

Among EU countries, Ireland has the highest share of businesses using AI applications.

That's according to European Commission data from 2020, which found that 23pc of enterprises in Ireland used at least one of these four AI applications: analysing big data internally using machine learning; analysing big data using natural language processing, generation or speech recognition; using a chatbot or virtual agent; or using service robots.

Overall, 7pc of enterprises in the EU with at least 10 people employed used at least one of these AI applications in 2020.

Behind Ireland, the countries with the widest uptake of AI tech were Malta (19pc), Finland (12pc) and Denmark (11pc). At the other end of the scale were Cyprus (3pc), Hungary (3pc), Slovenia (3pc) and Latvia (2pc).

A recent report from research and advisory company Forrester said there's a widespread perception that data privacy regulations, ethical concerns and a reluctance to adopt cutting-edge tech have resulted in European companies being less advanced in terms of AI adoption than companies in other regions.

A 2020 survey it conducted with responses from data decision-makers in France, Germany and the UK confirmed that there is a lag, but the gap may not be as wide as many perceive it to be.

However, compared with people from other parts of the world, European respondents were less bullish overall about the benefits of AI, according to Forrester.

While 31pc of North American decision-makers surveyed said the benefits of AI were increased automation and improved operational effectiveness, only 28pc of European respondents said the same.

One-third of those in North America said it could also increase revenue growth and improve customer experiences, but only 27pc of those in Europe agreed.

Forrester added that while Europe produces AI excellence, it has trouble scaling start-ups.

Large European companies including Airbus, Bosch, Rolls-Royce and Siemens have been innovating with AI, and Europe has been the birthplace of start-ups such as DeepMind and Featurespace.

However, many start-ups have been acquired by companies outside of the region (with Google snapping up UK-based DeepMind, for example) or have migrated their headquarters to the US.

But the EU is keen to give AI a boost. The European Commission aims to reach an annual investment of €20bn over the course of this decade to help Europe become a global leader in this area of tech. At the same time, it is focusing on making AI ethical and human-centred.

Much like how it took the baton on data protection laws, the European Commission is hoping to set new oversight standards in a bid to create trustworthy AI.

Earlier this year, it outlined a new set of proposals that would classify different AI applications depending on their risks and implement varying degrees of restrictions.

So while other regions may be slightly ahead of Europe when it comes to AI uptake, Forrester's report said that European companies are not lagging far behind, and the bloc is certainly leading the way in terms of its focus on ethics and trustworthy AI.

Go here to see the original:
AI in Europe: Who's leading the way and where is it heading? - Siliconrepublic.com
