
UK AI Strategy: ‘Openness’ an aim… but what of contracts? – The Register

It has been more than a month since the launch of the UK government's AI Strategy which, the authors said, "represents the start of a step-change for AI in the UK," and The Register, for one, has not forgotten.

While the strategy promises to "embed" supposed British values such as fairness and "openness" in the development and use of AI in the UK, events leading up to its launch, and in particular the behaviour of our government, tell a rather different story, one which could be worrying considering the likely impact of AI on society and the economy.

Some of the moves made by the UK over the first 18 months of the pandemic took place under the cover of emergency legislation, including deals inked by the government with a host of private tech firms in March 2020 to help deliver the NHS COVID-19 response.

One of these was the NHS COVID-19 data store, a project bringing together disparate medical and organisational data from across the national health service, with US spy-tech firm Palantir at the heart of it, although Google, Amazon, Microsoft, and AI firm Faculty all hold contracts to work on the platform. Planners in the government's response team were said to have found it useful, but it also attracted controversy. Then, in December last year, the contract was extended for another two years, again without scrutiny.

In May this year, a broad-based campaign group wrote to (then) UK Health Secretary Matt Hancock (yes, the "vast majority" of the UK are "onside" with the GP data grab Hancock). The letter called for greater openness around the government's embrace of this gang of private technology vendors. The campaigners soon found they had to threaten court action to get the private-sector contracts published after those contracts were awarded without open competition.

Faculty and its CEO, Marc Warner, for one, had no trouble getting close to government circles, the very circles in which the UK's leaders might be asked to be more mindful about asking private-sector players to help them with the business of governance.

According to the testimony of former chief advisor to the Prime Minister, Dominic Cummings, in front of the Health Select Committee, the CEO was present during much of the decision-making in the crucial early stages of the pandemic, when Cummings was still advising the PM.

Reports from The Guardian which Warner would later fail to deny suggested he used his relationship with Cummings to influence Whitehall. "It felt like he was bragging about it," a senior source said, adding Warner would casually tell officials: "Don't worry, I'll text Dom," or "I'm talking to Dom later."

Faculty said Warner wanted to talk to The Register to give his views on the government AI strategy in the week leading up to publication of the policy document, but later he was unable to speak to us. He wasn't the only one. A host of other key private and public figures who'd normally cheerfully provide their take found themselves speechless.

It's fair to say that, on Faculty's part, it might not have been able to speak to us because of the terms of its contract or due to concerns over commercially sensitive information; we don't know. What we do know is that a £2m Home Office contract was awarded to the firm without competition, for Innovation Law Enforcement (I-LE).

The tender documents offered few details about how AI might be used in law enforcement and when asked, the Home Office simply said: "We are unable to share further information since it's commercially sensitive."

So much for openness.

We are hoping to get more information from the private firm, which one could argue is less duty-bound than our country's leadership to give it to us. We have sent a list of questions via the company's PR firm. Given Faculty's history, and reports about its government contracts, it seems fair to ask, for the sake of openness, how many public-sector contracts it has been awarded and how many of those were awarded after open competition. It did not respond to these questions specifically.

It did, however, provide a statement saying: "Faculty is a highly experienced AI specialist that has delivered work for over 350 clients across 23 sectors of the economy and in 13 countries. We have strong governance procedures in place and all of our contracts with the government are won through the proper processes, in line with procurement rules."

Openness in government contracting is not only a question of fairness. If the UK is serious about developing the nation's industry in AI, or indeed any high-tech industry, it needs fair and open competition for the billions of taxpayer pounds it spends in the tech market.

Google's AI subsidiary DeepMind was also closely involved in the UK's pandemic response.

DeepMind co-founder Mustafa Suleyman, now veep for AI policy, was reportedly approached by NHSX to help work with patient data, including discussing whether Google's cloud products were suitable for its data store project. In his role as chief advisor to the prime minister, Dominic Cummings brought Demis Hassabis, CEO and co-founder of DeepMind, into the heart of government decision-making, according to his select committee testimony [PDF].

What's at stake when emergency contracts, not just to Palantir and Google and the like, but to many other vendors during the pandemic, escape scrutiny or circumvent the usual bidding and tendering process?

Peter Smith, former president of the Chartered Institute of Purchasing and Supply, told The Register that studies of countries including South Africa have shown that favouritism and nepotism in public procurement mean suppliers tend either to withdraw from the market or to cut investment in technology, products, and services, instead putting the money into employing an ex-minister as a non-exec or advisor, and into wining and dining government officials.

He went on to say that the recent spate of stories about a lack of openness in government contracts could damage how the UK is seen as a place to invest.

"We're in danger of moving from a country where we felt public procurement was in the upper quartile in the world, to a place where we're slipping down the league table," said Smith, who works as a consultant, having held senior roles in the public and private sector.

The picture in public procurement could then cut against government ambitions in AI, and it is not just Faculty that has a close relationship with the government and is involved with the government AI strategy. As mentioned, Google was part of the group on the NHS COVID-19 data store deal, and again it required the pressure of legal letters to get that deal aired in the public domain.

DeepMind got prime spot on the press release for the UK AI Strategy, under the banner of a "new 10-year plan to make the UK a global AI superpower."

"AI could deliver transformational benefits for the UK and the world, accelerating discoveries in science and unlocking progress," Hassabis said in the pre-canned publicity material.

Part of the UK's vision for its AI strategy is an industry "with clear rules [and] applied ethical principles."

But Google, DeepMind's parent company, has found it difficult to get out of the AI ethics quagmire.

A UK law firm is bringing legal action on behalf of patients it says had their confidential medical records obtained by Google and DeepMind in breach of data protection laws. Mishcon de Reya launched the legal action in September 2021, saying it plans a representative action on behalf of Andrew Prismall and the approximately 1.6 million individuals whose data was used as part of a testing programme for medical software developed by the companies.

DeepMind worked with Google and the Royal Free London NHS Foundation Trust under an arrangement formed in 2015. In 2017, Google's use of medical records from the hospital's patients to test a software algorithm was deemed legally "inappropriate" by Dame Fiona Caldicott, National Data Guardian at the Department of Health.

Law firm Linklaters carried out a third party audit on the data processing arrangement between Royal Free and DeepMind, and concluded their approach was lawful.

At the same time, former co-lead of the Chocolate Factory's "ethical artificial intelligence team" Timnit Gebru left under controversial circumstances in December last year after her managers asked her to either withdraw an as-yet-unpublished paper, or remove the names of employees from the paper.

In her time since leaving the search giant, Gebru has marked out a stance on AI ethics. In a recent interview with Bloomberg, she said labour and whistleblower protection was the "baseline" in terms of making sure AI was fair in its application.

"Anything we do without that kind of protection is fundamentally going to be superficial, because the moment you push a little bit, the company's going to come down hard," she said.

Among the long list of organisations and companies adding their names to the UK government's AI Strategy, who would back her stance?

We asked DeepMind, Benevolent AI CEO and co-chair of Global Partnership on Artificial Intelligence Joanna Shields, Alan Turing Institute professor Sir Adrian Smith, CEO of Tech Nation Gerard Grech, president of techUK Jacqueline de Rojas, and Nvidia veep David Hogan if they had thoughts on the issue.

None of them responded to the specific point, although we have included the responses we did receive in the box below.

While the UK has legal whistleblower protection in certain scenarios, it only applies to law-breaking, damage to the environment, and the health and safety of individuals. Where the law is unclear on AI, it is uncertain what protection whistleblowers might get.

Meanwhile, proposals from the Home Office suggest a public interest defence for whistleblowing might be removed.

On the questions of AI ethics, the focus has been on data. Historic data created by humans in a particular social context can, when used for training AI and ML, lead to biased results, as in the case of a sexist AI recruitment tool which Amazon scrapped shortly after its introduction.

An industry has developed around these questions, with vendors offering tools to scan for biases in data and to illuminate features that can act as proxies for race, such as postal codes.
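To illustrate the idea, here is a toy sketch, not any vendor's actual tool; all field names and records below are made up. One crude way to flag a proxy feature is to check how well each feature's values predict a protected attribute:

```python
from collections import Counter, defaultdict

# Hypothetical records: a tiny made-up dataset for illustration only.
records = [
    {"postcode": "E1", "income_band": "low",  "group": "A"},
    {"postcode": "E1", "income_band": "mid",  "group": "A"},
    {"postcode": "E1", "income_band": "low",  "group": "A"},
    {"postcode": "W2", "income_band": "mid",  "group": "B"},
    {"postcode": "W2", "income_band": "high", "group": "B"},
    {"postcode": "W2", "income_band": "low",  "group": "B"},
    {"postcode": "N3", "income_band": "low",  "group": "A"},
    {"postcode": "N3", "income_band": "mid",  "group": "B"},
]

def proxy_score(records, feature, protected="group"):
    """How well does knowing `feature` predict the protected attribute?

    Returns the accuracy of always guessing the majority protected
    group within each feature value (1.0 means a perfect proxy)."""
    by_value = defaultdict(Counter)
    for r in records:
        by_value[r[feature]][r[protected]] += 1
    correct = sum(c.most_common(1)[0][1] for c in by_value.values())
    return correct / len(records)

for feature in ("postcode", "income_band"):
    print(feature, proxy_score(records, feature))
# postcode 0.875
# income_band 0.75
```

In this toy data, postcode predicts the protected group almost perfectly, so a real tool, which would use more robust statistics, would flag it for review even though the model never sees the protected attribute directly.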

But for some, the problem of AI ethics runs deeper than the training data alone. A paper shared on Twitter by former Google ethics expert Gebru found that, far from considering the wider societal impact of their work, the authors of a sample of 100 influential machine learning papers define and apply values supporting the centralisation of power.

"Finally, we find increasingly close ties between these highly cited papers and tech companies and elite universities," the paper said [PDF].

Speaking to The Register, paper co-author Ravit Dotan, a postdoctoral researcher at the Center for the Philosophy of Science at the University of Pittsburgh, said the point of the study was to surface the values behind ML research and researchers' motivations.

"Who is the target, the beneficiary? Is it people within the discipline or is it a broader community? Or is it Big Tech? We wanted to see how authors intend to satisfy [that target]. We also wanted to understand the funding structure better," she said.

The paper also looked at whether ML researchers considered the negative consequences of their work. The vast majority did not. "It was very rare to see any kind of work addressing potential negative consequences, even in papers where you really would expect it, such as those looking at the manipulation of videos," Dotan said.

In a world where deepfake porn is prompting those whose likenesses have been stolen (mostly women) to fight for tighter regulation, the negative consequences of image manipulation seem all too evident.

In her interview with Bloomberg, Gebru also called for the regulation of AI companies. "Government agencies' jobs should be expanded to investigate and audit these companies, and there should be standards that have to be followed if you're going to use AI in high-stakes scenarios," she said.

But the UK's AI strategy is vague on regulation.

Although it acknowledges trends like deepfakes and AI-driven misinformation might be risks, it promises only to "publish a set of quantitative indicators... to provide transparency on our progress and to hold ourselves to account."

It promises that "the UK public sector will lead the way by setting an example for the safe and ethical deployment of AI through how it governs its own use of the technology."

It adds that the UK will "seek to engage early with countries on AI governance, to promote open society values and defend human rights.

"Having exited the EU, we have the opportunity to build on our world-leading regulatory regime by setting out a pro-innovation approach, one that drives prosperity and builds trust in the use of AI.

"We will consider what outcomes we want to achieve and how best to realise them, across existing regulators' remits and consider the role that standards, assurance, and international engagement plays."

One existing regulator, the Information Commissioner's Office, is already engaged with proposed changes to data protection law following the UK's departure from the EU. The government review has provoked alarm as it proposes watering down individuals' rights to challenge decisions made about them by AI.

Meanwhile, the UK has published guidance on AI ethics in the public sector, developed by the Alan Turing Institute, an AI body formed by five leading UK universities. This was followed by the government's Ethics, Transparency and Accountability Framework for Automated Decision-Making, launched in May 2021.

Critics might argue that guidance and frameworks do not amount to law and remain untested. The government has promised to publish a White Paper or policy document on governing and regulating AI next year.

A government spokesperson sent us a statement after initially only wanting to brief The Reg on background:

"We are committed to ensuring AI is developed in a responsible way. We have published extensive guidance on how firms can use the technology ethically and transparently and issued guidance so workers in the field can report wrongdoing while retaining their employment protections. We are also going to publish a White Paper on governing and regulating AI as part of our new national AI Strategy."

In the launch of the AI Strategy, business secretary Kwasi Kwarteng described his desire to "supercharge our already admirable starting position" in AI. But it will take more than words to convince the wider world. Observers will want to see more openness in public-sector contracting and in the government's approach to AI ethics to back up the government's ambition.


Biggest AI Innovations And Milestones Of 2021 – Analytics India Magazine

"Remember to celebrate milestones as you prepare for the road ahead," said philanthropist Nelson Mandela.

Artificial Intelligence, or AI, is an ever-evolving field, and the occasional failures here and there should not stop us from celebrating the great advancements. In the last year and a half, despite the global crisis (and in some cases because of it), scientists, researchers, and developers have made remarkable contributions, innovated, and reached unprecedented milestones in the field of AI.

As we head closer to the end of 2021, Analytics India Magazine takes a look back at the year that was, and the AI innovations and milestones of this year that made it to the headlines.

Early this year, Facebook AI developed SEER (SElf-supERvised), a billion-parameter self-supervised computer vision model. The model can learn from any random group of images on the internet, without the careful curation and labelling of images that is otherwise a prerequisite for computer vision training. So far, the team at Facebook AI has tested SEER on one billion uncurated, unlabelled, publicly available Instagram images. Reportedly, it performed better than the most advanced self-supervised systems. This breakthrough clears the path for flexible, accurate, and adaptable computer vision models in the future.

Google's parent company Alphabet introduced Isomorphic Labs in an attempt to accelerate the discovery of new drugs using AI. The founder and CEO of its subsidiary DeepMind, Demis Hassabis, announced the creation of the lab, which ultimately aims to find cures for some of humanity's most devastating diseases.

Alphabet plans to develop a computational platform to better understand the biological systems and find ways to treat diseases. Although separate, DeepMind and Isomorphic intend to occasionally collaborate to build off the research, discoveries and protein structure work.

Earlier this year, Google Cloud announced the availability of Vertex AI at the Google I/O event. It is a managed machine learning (ML) platform for the deployment and maintenance of AI models. Vertex AI brings AutoML and AI Platform together into a unified API and can be used to build, deploy, and scale ML models faster. As used by Google Research, Vertex AI required 80 per cent fewer lines of code for custom modelling. It integrates with open-source frameworks including TensorFlow, PyTorch, and scikit-learn.

Microsoft developed a large-scale pre-trained model for symbolic music understanding, MusicBERT. The model understands music from symbolic data, that is, in MIDI format rather than audio, and can then perform genre classification, emotion classification, and music-piece matching. The tech giant used its OctupleMIDI encoding method, a bar-level masking strategy, and a large-scale symbolic music corpus of more than one million tracks.

MusicBERT achieves state-of-the-art performance on music understanding tasks and, going forward, the team at Microsoft aims to apply the model to tasks including structure analysis and chord recognition.

AI company OpenAI and Microsoft collaborated to launch the AI pair programmer Copilot in July this year. Based on OpenAI Codex, the new AI system is trained on open-source code, contextualising a situation using docstrings, function names, preceding code, and comments to determine and generate the most relevant code.

GitHub Copilot is trained on billions of lines of public code, putting the knowledge one needs at one's fingertips while saving time and helping one stay focused. It works with a broad set of frameworks and languages, including TypeScript, Ruby, Java, Go, and Python.
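As an illustration of the docstring-plus-function-name context such systems work from (a hand-written example, not actual Copilot output), a model given only the signature and docstring below might complete the body along these lines:

```python
import re

def slugify(title: str) -> str:
    """Lower-case a title and replace runs of non-alphanumeric
    characters with single hyphens, as in a blog post URL."""
    # The kind of completion a code-generation model might propose
    # from the name and docstring alone.
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")

print(slugify("UK AI Strategy: 'Openness' an aim!"))
# uk-ai-strategy-openness-an-aim
```

The point is that the surrounding context, not just the code typed so far, steers what gets generated.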

Google developed and launched TensorFlow 3D, a modular library bringing 3D deep learning capabilities to TensorFlow, earlier this year. The library gives access to sets of operations, loss functions, and models for the development, training, and deployment of 3D scene understanding models, as well as data processing tools and metrics. TensorFlow 3D supports datasets including Waymo Open, Rio, and ScanNet, and offers three pipelines: 3D semantic segmentation, 3D instance segmentation, and 3D object detection.

After diving deep into the Indian startup ecosystem, Debolina is now a Technology Journalist. When not writing, she is found reading or playing with paint brushes and palette knives. She can be reached at debolina.biswas@analyticsindiamag.com


Zen and the Art of Entrepreneurship – Entrepreneur

Opinions expressed by Entrepreneur contributors are their own.

One of the most important books I've read is called Zen Mind, Beginner's Mind by Shunryu Suzuki. I'll give you a short summary of the book, from the book: "In the beginner's mind there are many possibilities, but in the expert's there are few." The entire goal of Zen, according to Suzuki, is to remain a perpetual beginner. Zen is all about living (actions) and less about life (ideas/interpretations about living). So, why should someone strive for the perspective of the beginner? Beginners are eager, inexperienced, patient, curious, creative, and open-minded. They aren't yet stuck in habitual patterns; they are free in ways that the experienced person is not. When you started your business, you were a beginner too.


"Entre" means to enter, begin, start. "Pre" means before. "Neur" is related to nerves, as in neurological. Essentially, the word entrepreneur means to enter something before you allow your mind to make you nervous; to dive in with both feet before you second-guess yourself. To be an entrepreneur is to be living in a state of Zen. The word entre-pre-neur, broken down in this way, is, in essence, the goal of Zen. Whatever state of mind you were in when you had the audacity to think your idea was good enough to succeed was a state of Zen. To live in a state of Zen is to deny the mind its interpretations of reality and simply live reality, from moment to moment. It sounds simple, but it isn't easy. Over time, the same Zen state of mind which was the creative impetus to start the business has been altered by a whole mess of problems related to being a business owner. The honeymoon period is over, and the excitement slowly fades. It has officially become work.

We shall not cease from exploration

And the end of all our exploring

Will be to arrive where we started

And know the place for the first time.

from "Little Gidding" by T.S. Eliot

Beginners take nothing for granted, while experienced people tend to overlook everything which has become ordinary. I bet there is a stapler or something at your desk which has been there for years and you don't even notice it. Every day you look at it, but you don't actually see it. The word discriminate means to notice differences. The human mind is built to notice differences, so the simple fact that the stapler is always there is why you stop taking notice of it. However, if someone were to move it, suddenly you'd notice. We take so much for granted simply because it becomes routine and ordinary. When we get used to doing things in a certain way, stagnation occurs. Stagnation is the enemy of creative thinking.

We all see life through a lens that is colored by our experiences. Significant experiences (or lack thereof), both positive and negative, change the way we perceive reality. We all have a subjective experience of an objective reality, and no one is free of this. One definition of enlightenment is simply seeing things just as they are, without bringing our experiences and interpretations. One metaphor for the practice of meditation is a bathroom mirror collecting dust. Day in and day out, dust collects on the mirror. When the mirror gets too dusty it starts to distort reality. Meditation is the process of cleaning the mirror every day so that we can see everything the mirror reflects as it really is. Experiences can accumulate just like dust on the mirror. They can serve us or cause clutter in our minds, or both. If you are an entrepreneur, a business owner, or an aspiring entrepreneur, I would argue that you are already living in a state of Zen.


A brand new mirror hasn't had time to collect dust. This is why the newest member of your team may have the most important perspective. Beginners don't know enough to be nervous about all that could go wrong. They take nothing for granted and they see things from a different perspective. Wait about two weeks for them to get acclimated and then meet personally with each new person at your company. It will be well worth your time. Furthermore, they will be honored by the opportunity and will feel valued. As you are speaking with them, simply ask them what they've noticed so far and if there are any changes they'd make to the way things are done, set up, systems, etc. Encourage them to speak freely and remind them that you are asking to make sure things haven't gotten stagnant. I think you'll be surprised at the perspective they bring and the information they provide.

How does one change their perspective, even momentarily? Well, the solution is probably what you've all been guessing. Get yourself a rubber band. Get yourself a rubber band and put it on the wrist of your non-dominant hand. Throughout the day, when you see the rubber band and are conscious of it, simply pause. Let the rubber band serve as a reminder of your new goals. If your goal is to gain the perspective of a beginner, simply allow your focus to shift and pretend you know nothing. Pretend it's your very first day at work, and you're still a bit wet behind the ears. The next time you walk to your desk, look at everything. Take the time to stop and look at what's around you and you'll notice all the things which normally go unnoticed. This perspective shift may seem like a waste of precious time, but it will prove immensely valuable to your creativity in regard to solving problems and combating the stagnation we are all subject to.

Related:Why GuidedMeditationis Essential For Every Entrepreneur

If your goal is a practical application of meditation, pause when you see your rubber band and take three slow, deep (diaphragmatic) breaths. You'll soon realize that there is always time for three breaths. The world won't stop spinning if we stop to take three breaths. Problems won't actually pile up in the 10 seconds it takes to consciously breathe three times.

I've taught meditation to beginners and the most common problems I hear are the following: "It just doesn't work for me" or "I don't have enough time." The problem was almost always the same when I asked them to elaborate on their experience. They downloaded an app and hoped for the best, or they set unrealistic expectations for themselves. Imagine going in for surgery with a surgeon who was self-taught, using an app. You'd be terrified! If a personal trainer wanted to start you out by running half marathons, you'd never go back. Yet, we attempt to sit and meditate for 10 or 20 minutes, in an uninterrupted way, on the first try. We expect something to happen or to clear our minds. That's just not how meditation works. It takes years to sit and meditate in a relatively uninterrupted manner. In this case, the expectations dictate reality, and the results are rather undesirable.

I was taught using this rubber band method, and not only was it successful but I have since used it to change other habits. I was wearing a rubber band on my wrist for over a year before I actually started to sit down and meditate. I was meditating all along, but doing it throughout the day in a more manageable way. Several years later I was teaching people how to meditate. So, whether you've never tried to meditate or you've tried and failed, try again. This time you will go into the practice equipped with the practical tips and more realistic expectations I've outlined in this article. And remember: Meditation is not about achieving anything. There are no goals. The experience itself is the goal. If you're doing it right, the more you learn, the more you'll feel like a beginner. Oh, and I'd highly recommend you wear the rubber band in the shower or you might forget to put it back on.


Debunking The Four Most Common Data Science Myths – Influencive

Every business, regardless of its size, collects data. Whether it is financial data, HR data, traffic data, or sales data, modern businesses using digital tools cannot avoid gathering mountains of data.

The problem with business data is that few businesses use it to its fullest potential. Buried in each company's data vaults are clues to making better decisions, identifying opportunities, and optimizing the outcomes of whatever business they do. To uncover and unravel those clues, businesses must engage in the field of data science.

"Data science is no longer a nice-to-have or an expensive experiment for businesses," says Jan Maly, Data Science Lead at STRV. "It's vital for gaining a competitive edge today. AI is now attainable, affordable and, most importantly, a necessity for almost all businesses."

STRV is a software design and engineering team with nearly 20 years of experience in developing digital products that help companies unlock business opportunities. STRV believes that there are four data science myths that can keep companies from embracing the power of data science.

Obviously, doing the work of data science will cost companies something. At the least, companies will need to make room in the budget to obtain or develop software that can tame data and extract understanding. However, when the impact of applied data science is understood, those expenses can be better seen as investments that lead to increased efficiency, effectiveness, and sales. The understanding gained from data science allows companies to automate processes, increase speed, and mitigate human errors, all of which save companies money.

For most retail businesses, product descriptions provide a wealth of data. Utilizing that data to categorize products can make it easier for customers to find what they want or for businesses to make suggestions about related items.

An AI solution provided by STRV allowed a company to use its available product data to categorize 30,000 types of shoes with 96 percent accuracy and a 20 millisecond per item processing time. The project was completed 500 times faster than it could have been if managed manually. Combining AI and data science decreases the cost while increasing the return on investment.
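STRV has not published the details of that system. Purely as a toy illustration of categorising products from their descriptions (the data, categories, and method here are made up and nothing like a production model), a bag-of-words nearest-centroid classifier might look like this:

```python
from collections import Counter

# Toy training data: (description, category). Invented examples.
train = [
    ("leather ankle boot with zip", "boots"),
    ("waterproof hiking boot", "boots"),
    ("mesh running trainer with foam sole", "trainers"),
    ("lightweight knit running trainer", "trainers"),
]

def tokens(text):
    return text.lower().split()

# Build one word-frequency "centroid" per category.
centroids = {}
for desc, cat in train:
    centroids.setdefault(cat, Counter()).update(tokens(desc))

def categorise(desc):
    """Pick the category whose vocabulary overlaps the description most."""
    words = tokens(desc)
    def overlap(cat):
        return sum(centroids[cat][w] for w in words)
    return max(centroids, key=overlap)

print(categorise("zip-up leather boot"))     # boots
print(categorise("foam sole knit trainer"))  # trainers
```

A real system would use learned embeddings rather than raw word counts, but the shape of the task, from free-text description to category label at millisecond speed, is the same.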

Because most science deals with natural processes that cannot be rushed or manipulated, it is not wrong to think that good science takes time. Businesses, especially businesses trying to solve problems, typically do not have a lot of time. Addressing problems with data science can seem like a luxury that your business cannot afford.

Data science is different. Data moves at the speed of light and the technology and methods for mining and understanding data, once developed, can be widely applied. STRV approaches data science projects by first developing a Proof of Concept (POC) to validate that the problem can be solved with the data that is available. By committing to get to a POC conclusion quickly, STRV allows for the entire timeline for data science solutions to be greatly reduced.

STRV has undertaken major projects for companies including Songclip, Cinnamon, and AllVoices. Even with projects that involve cutting-edge technology and demand a high degree of efficiency and accuracy, the POC phase of the process has rarely taken more than one month.

In the case of Songclip, it took STRV only four weeks to build the entire solution for mapping clips to lyrics. That solution ultimately empowered the company to increase utilization of its database of clips from 4 percent to 100 percent without adding extra workforce. When data science is done correctly, it can provide solutions on a schedule that works for any business.

Data science is not the science of tomorrow. It is a key tool being used by companies today to gain a competitive edge. There was a time when a mobile app or fancy user interface was enough to differentiate your company. Now those things are the norm. Data science makes companies smarter and better equipped to deliver a five-star customer experience.

While every company has data, successful companies are those who are building their business around that data, applying data science, and using AI as the core driver of competitiveness and success.

There are some obvious examples of companies that are benefitting from data science, such as ecommerce and online content companies. However, when AI is introduced to the equation, virtually any company can benefit from data science.

Regardless of the business that you conduct, if your company needs to make informed decisions, motivate employees, develop and adhere to best practices, explore new business opportunities, and identify target audiences, data science can help your business to succeed.

Published November 6th, 2021

See more here:

Debunking The Four Most Common Data Science Myths - Influencive

Read More..

The Role of Women in Scaling up AI and Data Science – Analytics Insight

Women are the key piece of the puzzle in realizing the highest maturity levels of digital enterprises, but unless we recognize this, our progress in AI and technology will remain stagnant. To close the gender gap in science, technology, engineering, and math (STEM) and to accelerate advances in artificial intelligence and the sciences, we must encourage and support women at all levels, from government to enterprise, and establish equal employment opportunities for all.

Women make up only a fraction of the artificial intelligence workforce, whether in research and development or as employees at technology-inclined firms. According to the World Economic Forum, "Non-homogeneous teams are more capable than homogenous teams of recognizing their biases and solving issues when interpreting data, testing solutions or making decisions." In other words, diverse teams, and especially those with women at their center, are a necessary provision for enterprises that want to adopt, build, and accelerate enterprise AI maturity. At present, unfortunately, few enterprises understand how critical women are to boosting AI maturity levels.

STEM, data science, and AI fields lack female role models. Without women for girls to look up to, it becomes difficult for young women to envision future careers in science, technology, and engineering. A 2018 Microsoft survey shows that female STEM role models boost girls' interest in STEM careers from 32 percent to 52 percent. We must therefore showcase the achievements of women in the sciences and engineering across the world to capture the attention of girls and women everywhere.

One of the biggest pressures women face in STEM careers is cutthroat competition with male counterparts and the toxic workplace culture it creates. An HBR article found that three-fourths of female scientists support one another in the workplace to ease these tensions. Moreover, women are often dismissed as inferior by men holding equivalent positions, whether those jobs are in engineering, data science, or AI. All of these factors push women to swiftly rule out STEM jobs to avoid such disquieting workplace circumstances.

According to a survey conducted by BCG, when it comes to STEM, "Women place a higher premium on applied, impact-driven work than men do: 67% of women expressed a clear preference for such work, compared with 61% of men." This finding highlights a significant fact: women are far more likely to pursue STEM roles that offer meaning, purpose, and impactful results, but many women don't perceive this purpose and impact in STEM jobs. Without clear sight of a high-impact career path, women tend to turn away from STEM, data science, and AI-related careers.

Studies have shown that communication is of the utmost importance when it comes to getting more women involved in STEM careers. According to BCG GAMMA, just 55% of women feel they know enough about employment opportunities in data science. Furthermore, vague job qualifications, such as being "strong in data science," and, conversely, incredibly in-depth job descriptions in search of "data wizard" talent, tend to steer women clear of STEM-related jobs. An HBR study likewise found that female engagement with STEM employers falls far behind men's, and that this should come as no surprise "given the selection bias that accompanies personal work networks, especially in a young and still male-dominated field."

It isn't enough to pique the interest of girls and young women in STEM careers: the goal is to maintain, foster, and grow that interest. A study published in the journal Social Forces found that women in STEM are much more likely to abandon their jobs than women in other careers. More precisely, the study highlights that some 50% of women holding STEM jobs left after 12 years, whereas that number dropped to 20% for women in other fields. On average, women tend to leave STEM after five years of industry involvement. But why? According to the same study, "Women with engineering degrees said they left engineering because of lack of advancement or low salary, along with other working conditions." These facts show that retention of women in the STEM, data science and AI workforce is chief among the challenges to address.

Analytics Insight is an influential platform dedicated to insights, trends, and opinions from the world of data-driven technologies. It monitors developments, recognition, and achievements made by Artificial Intelligence, Big Data and Analytics companies across the globe.

Here is the original post:

The Role of Women in Scaling up AI and Data Science - Analytics Insight

Read More..

Olive Partners with ClosedLoop to Improve Care and Reduce Financial Risk for Patients – Yahoo Finance

Company continues rapid expansion of The Library with the addition of ClosedLoop

U.S.A., Nov. 10, 2021 /PRNewswire/ -- Olive, the automation company creating the Internet of Healthcare, today announced a partnership with ClosedLoop, a healthcare data science platform that makes it easy for healthcare organizations to use AI to improve outcomes and reduce costs. ClosedLoop has also joined The Library, a first-of-its-kind universal marketplace for healthcare solutions, to provide AI-enabled predictive analytics to deliver better patient outcomes, such as reducing unplanned hospitalizations, readmission rates and hospital-acquired infections.

Hospitals and health systems are exploring the use of predictive analytics, often linked to quality measures and financial incentives, to identify patients at risk for undesirable outcomes such as sepsis, 30-day readmissions, and preventable emergency department visits. Previous generations of analytics tools lack the precision to efficiently identify patients for targeted interventions, the transparency to build clinician trust and drive adoption, and the ability to customize algorithms to each organization's specific population mix and available data sources.

The ClosedLoop platform enables healthcare organizations to rapidly train and deploy customized predictive machine learning models that accurately predict risk for a wide variety of selected outcomes at the individual patient level, transparently explain which factors contribute to an individual patient's predicted risk, and allow monitoring of performance over time for continuous learning. By deploying ClosedLoop's patient health forecasts as a Loop via Olive Helps, clinicians will have powerful, individualized insights delivered within clinical workflows, ensuring that critical information is available when and where they need it.
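The workflow described above, training a risk model for a chosen outcome and surfacing the factors behind each patient's score, can be sketched in a few lines. This is purely illustrative and is not ClosedLoop's actual implementation: the features, data, and model choice are all hypothetical, and a linear model is used only because its per-patient contributions are trivially transparent.

```python
# Illustrative sketch only: a toy patient-level readmission-risk model with
# per-patient explanations. Features, data, and labels are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["prior_admissions", "age_over_65", "chronic_conditions"]

# Synthetic training data: 200 patients, binary 30-day readmission label.
X = rng.integers(0, 5, size=(200, 3)).astype(float)
y = (X @ np.array([0.8, 0.5, 0.6]) + rng.normal(0.0, 1.0, 200) > 3.5).astype(int)

model = LogisticRegression().fit(X, y)

# Score one patient; for a linear model, coefficient * feature value gives
# each factor's contribution to the predicted log-odds.
patient = np.array([[3.0, 1.0, 2.0]])
risk = model.predict_proba(patient)[0, 1]
contributions = dict(zip(features, model.coef_[0] * patient[0]))
print(f"predicted readmission risk: {risk:.2f}")
for name, c in sorted(contributions.items(), key=lambda t: -abs(t[1])):
    print(f"  {name}: {c:+.2f}")
```

In production-grade tools, tree-based models with SHAP-style attributions are a more common choice; the linear decomposition above is simply the most transparent variant of the same idea.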

"ClosedLoop and Olive are both striving to radically improve healthcare through the use of artificial intelligence," said Andrew Eye, CEO, ClosedLoop. "Together, ClosedLoop and Olive will propel AI-powered patient health forecasts to clinicians and providers, helping them unlock valuable insights to provide life-saving care for patients."

ClosedLoop will enable hospitals that have already implemented Olive Helps to forecast patient health, and provide clinicians with the ability to identify and intervene with at-risk patients. Clinicians using ClosedLoop with Olive Helps can:

See highly individual, patient-level predictions of risk for preventable adverse outcomes, while focusing more attention on patients who are identified as particularly high-risk;

Understand patient-level factors contributing to future risk, while visualizing historical risk trends; and

Select clinical and non-clinical targeted interventions most likely to address each patient's individually identified risk factors.

Additionally, analytics teams using ClosedLoop with Olive Helps can:

Train highly accurate models customized to their organization's specific population mix and available data sources;

Select from a wide variety of model templates to create predictive models for use cases of highest priority across different needs within their organization; and

Rapidly train, validate, and deploy predictive models to clinical workflows within Olive Helps.

"Olive and ClosedLoop both aim to help healthcare organizations improve patient outcomes and reduce costs through innovative technology," said Patrick Jones, executive vice president, partnerships, Olive. "As Olive continues creating the Internet of Healthcare, our partnership with ClosedLoop will help clinicians harness the power of AI and automation to make better decisions, while identifying, intervening and better caring for the most-at-risk patients."

For more information about Olive's Partner Programs, including The Library, visit oliveai.com.

About ClosedLoop

ClosedLoop.ai is healthcare's data science platform. We make it easy for healthcare organizations to use AI to improve outcomes and reduce costs. Purpose-built and dedicated to healthcare, ClosedLoop combines an intuitive end-to-end machine learning platform with a comprehensive library of healthcare-specific features and model templates. Customers use ClosedLoop's Explainable AI to drive clinical excellence, operational efficiency, value-based contracts, and enhanced revenue. Winner of the CMS AI Health Outcomes Challenge and named a KLAS Healthcare AI Top Performer for 2020, ClosedLoop is headquartered in Austin, Texas.

About Olive

Olive is the automation company creating the Internet of Healthcare. The company is addressing healthcare's most burdensome issues through automation, delivering hospitals, health systems and payers increased revenue, reduced costs, and increased capacity. People feel lost in the system today, and healthcare employees are essentially working in the dark due to outdated technology that creates a lack of shared knowledge and siloed data. Olive is driving connections to shine new light on healthcare processes, improving operations today so everyone can benefit from a healthier industry tomorrow. To learn more about Olive, visit oliveai.com/

Media Contact: Rachel Forsyth, 312-329-3982, media@oliveai.com

View original content to download multimedia:https://www.prnewswire.com/news-releases/olive-partners-with-closedloop-to-improve-care-and-reduce-financial-risk-for-patients-301420664.html

SOURCE Olive

The rest is here:

Olive Partners with ClosedLoop to Improve Care and Reduce Financial Risk for Patients - Yahoo Finance

Read More..

The Question We've Stopped Asking About Teen-Agers and Social Media – The New Yorker

The trouble started in mid-September, when the Wall Street Journal published an exposé titled "Facebook Knows Instagram Is Toxic for Teen Girls, Company Documents Show." The article revealed that Facebook had identified disturbing information about the impact of its Instagram service on young users. It cited an internal company presentation, leaked to the paper by an anonymous whistle-blower, that included a slide claiming that "thirty-two percent of teen girls said that when they felt bad about their bodies, Instagram made them feel worse." Another slide offered a blunter conclusion: "Teens blame Instagram for increases in the rate of anxiety and depression. This reaction was unprompted and consistent across all groups."

These revelations sparked a media firestorm. "Instagram Is Even Worse Than We Thought for Kids," announced a Washington Post article published in the days following the Journal's scoop. "It's Not Just Teenage Girls: Instagram Is Toxic for Everyone," claimed an op-ed in the Boston Globe. "Zuckerberg's public comments about his platform's effects on mental health appear to be at odds with Facebook's internal findings," noted the New York Post. In a defiant post published on his Facebook account, Mark Zuckerberg pushed back, stating that the motives of his company were misrepresented. The very fact that Facebook was conducting this research, he wrote, implies that the company cares about the health impact of its products. Zuckerberg also pointed to data, included in the leaked slides, showing that in eleven out of the twelve areas of concern studied (such as loneliness and eating issues), more teen-age girls said that Instagram helped rather than hurt. In the background, however, the company paused work on a new Instagram Kids service.

These corporate responses weren't enough to stem the criticism. In early October, the whistle-blower went public in an interview on 60 Minutes, revealing herself to be Frances Haugen, a data scientist who had worked for Facebook on issues surrounding democracy and misinformation. Two days later, Haugen testified for more than three hours before a Senate subcommittee, arguing that Facebook's focus on growth over safeguards had resulted in "more division, more harm, more lies, more threats, and more combat." In a rare moment of bipartisanship, Democratic and Republican members of the subcommittee seemed to agree that these social-media platforms were a problem. "Every part of the country has the harms that are inflicted by Facebook and Instagram," the subcommittee chair, Senator Richard Blumenthal of Connecticut, stated in a press conference following Haugen's testimony.

This is far from the first time that Facebook has faced scrutiny. What struck me about this particular pile-on, however, was less its tone, which was near-uniformly negative, than what was missing. The commentary reacting to the Journal's scoop was quick to demand punishment and constraints on Facebook. In many cases, the writers seethed with frustration about the lack of such retribution enacted to date. "Both Democrats and Republicans have lambasted Facebook for years, amid polls showing the company is deeply unpopular with much of the public," noted a representative article from the Washington Post. "Despite that, little has been done to bring the company to heel." What's largely absent from the discussion, however, is any consideration of what is arguably the most natural response to the leaks about Instagram's potential harm: Should kids be using these services at all?

There was a moment in 2018, in the early stages of the Cambridge Analytica scandal, when the hashtag #DeleteFacebook began to trend. Quitting the service became a rational response to the growing litany of accusations that Facebook faced, such as engineered addiction, privacy violations, and its role in manipulating civic life. But the hashtag soon lost momentum, and the appetite for walking away from social media diminished. Big-swing Zeitgeist articles, such as a 2017 Atlantic story that asked "Have Smartphones Destroyed a Generation?," gave way to smaller policy-focussed polemics about arcane regulatory responses and the nuances of content-moderation strategies. This cultural shift has helped Facebook. "The reality is that young people use technology. Think about how many school-age kids have phones," Zuckerberg wrote in his post responding to the latest scandal. "Rather than ignoring this, technology companies should build experiences that meet their needs while also keeping them safe." Many of the politicians and pundits responding to the Facebook leaks implicitly accept Zuckerberg's premise that these tools are here to stay, and all that's left is to argue about how they operate.

I'm not sure, however, that we should be so quick to give up on interrogating the necessity of these technologies in our lives, especially when they impact the well-being of our children. In an attempt to keep this part of the conversation alive, I reached out to four academic experts, selected from both sides of the ongoing debate about the harm caused by these platforms, and asked them, with little preamble or instruction, the question missing from so much of the recent coverage of the Facebook revelations: Should teen-agers use social media? I wasn't expecting a consensus response, but I thought it was important, at the very least, to define the boundaries of the current landscape of expert opinion on this critical issue.

I started with the social psychologist Jonathan Haidt, who has emerged in recent years, in both academic and public circles, as one of the more prominent voices on social media and teen-age mental health. In his response to my blunt question, Haidt drew a nuanced distinction between communication technology and social media. "Connecting directly with friends is great," he told me. "Texting, Zoom, FaceTime, and Snapchat are not so bad." His real concern was platforms that are specifically engineered to keep the child's eyes glued to the screen for as long as possible in a never-ending stream of social comparison and validation-seeking from strangers, platforms that see the user as the product, not the customer. "How did we ever let Instagram and TikTok become a large part of the lives of so many eleven-year-olds?" he asked.

I also talked to Adam Alter, a marketing professor at N.Y.U.'s Stern School of Business, who was thrown into the social-media debate by the publication of his fortuitously timed 2017 book, "Irresistible," which explored the mechanisms of addictive digital products. "There's more than one way to answer this question, and most of those point to no," he answered. Alter said that he has delivered this same prompt to hundreds of parents and that none of them seem happy that their teens use social media. Many of the teens he spoke with have confirmed a similar unease. Alter argued that we shouldn't dismiss these self-reports: "If they feel unhappy and can express that unhappiness, even that alone suggests the problem is worth taking seriously." He went on to add that these issues are not necessarily easy to solve. He expressed worry, for example, about the difficulty of trying to move a teen-ager away from social media if most of their peers are using these platforms to organize their social lives.

On the more skeptical side of the debate about the potential harm to teen-agers is Laurence Steinberg, a psychology professor at Temple University and one of the world's leading experts on adolescence. In the aftermath of Haugen's Senate testimony, Steinberg published an Op-Ed in the Times arguing that the research linking services like Instagram to harm is still underdeveloped, and that we should be cautious about relying on intuition. "Psychological research has repeatedly shown that we often don't understand ourselves as well as we think we do," he wrote. In answering my question, Steinberg underscored his frustration with claims that he thinks are out ahead of what the data support. "People are certain that social media use must be harmful," he told me. "But history is full of examples of things that people were absolutely sure of that science proved wrong. After all, people were certain that the world was flat."

Continued here:

The Question We've Stopped Asking About Teen-Agers and Social Media - The New Yorker

Read More..

Knowland Releases First-of-Its-Kind Future Event Activity Forecast – PRNewswire

The Meetings Recovery Forecast (MRF) demonstrates projected industry recovery patterns and is based on Knowland's proprietary data and regression models leveraging almost 20 million global events over the last 15 years. The forecast uses a natural recovery model, assuming historic seasonal patterns without major market disruption, to index the recapture of meeting activity against baseline levels from 2019. By comparing past data to evolving data trends, hoteliers can better understand relevant changes and their implications as the market moves into 2022 and beyond.
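Knowland hasn't published the model itself, but the core idea of indexing recaptured meeting activity against a 2019 seasonal baseline can be sketched as follows; all figures below are hypothetical.

```python
# Hypothetical sketch of a recovery index: compare each month's observed
# event volume to the same month in 2019 to strip out seasonality.
baseline_2019 = {"Jan": 900, "Feb": 950, "Mar": 1100, "Apr": 1200}
observed_2021 = {"Jan": 400, "Feb": 500, "Mar": 700, "Apr": 850}

recovery_index = {
    month: round(observed_2021[month] / baseline_2019[month] * 100, 1)
    for month in baseline_2019
}
print(recovery_index)
# e.g. {'Jan': 44.4, 'Feb': 52.6, 'Mar': 63.6, 'Apr': 70.8}
```

An index of 100 would mean meeting volume has fully recaptured its 2019 level for that month, which is what makes month-over-month comparisons meaningful despite seasonal swings.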

Jeff Bzdawka, chief executive officer, Knowland, said: "Knowland's Meetings Recovery Forecast model is the foundation for future predictive forecasting. It applies the intelligence of machine learning to Knowland's expanding meetings and events database to generate thoughtful, actionable AI-driven insights for hotels on the regional, local and even property levels."

Kristi White, chief product officer, Knowland, said: "As we continue to increase our data sources, we have an even better view of the potential recovery path for hoteliers. Data science allows us to compare years of historical seasonal velocity to our latest data models to help hoteliers understand how to move forward into a more profitable future. The Meetings Recovery Forecast offers the hospitality industry guidance on when to start rebuilding sales staff, how to plan for upcoming seasonal variances, and basically when to turn your re-vamped sales engine back on."

ABOUT KNOWLAND

Knowland is the world's leading provider of data-as-a-service insights on meetings and events for hospitality. With the industry's largest historical database of actualized events, thousands of customers trust Knowland to sell group smarter and maximize their revenue. Knowland operates globally and is headquartered just outside Washington, DC. To learn more about our solutions, visit http://www.knowland.com or follow us on Twitter @knowlandgroup.

Press Contact: Kim Dearborn, [emailprotected], 909.455.4316

SOURCE Knowland

Read this article:

Knowland Releases First-of-Its-Kind Future Event Activity Forecast - PRNewswire

Read More..

Do You Want To Deploy Responsible AI In Your Organization? Join This Session To Operationalize Responsible AI – Analytics India Magazine

As AI adoption increases across industries, the emphasis has shifted heavily to developing and deploying ethical, responsible AI applications.

Responsible Artificial Intelligence is a positive force. According to Gartner, Responsible AI encompasses several aspects of making the right decisions when adopting AI, aspects that are often addressed independently by organizations. A responsible AI framework focuses on bias detection, privacy, governance, and explainability to help organizations harness the power of AI.

Nevertheless, the practical implications of Responsible AI are unclear. Can this be applied across industries and domains? How can we responsibly deploy AI?

During this complimentary fireside, we will discuss the most critical aspects of responsible AI.

To address these queries and more, Tredence is conducting a Fireside Chat session with speakers including Professor Balaraman Ravindran, Head of the Robert Bosch Centre for Data Science and AI and Professor at IIT Madras; Soumendra Mohanty, Chief Strategy Officer & Chief Innovation Officer at Tredence Inc.; and Aravind Chandramouli, Head of AI CoE at Tredence Inc. The session will be conducted around the theme "Responsible AI: Decode, Contextualise and Operationalise."

This session is designed to help you learn best practices & techniques for driving Responsible AI in your organization, achieve fairness in AI deployment and gain customer trust.

Prof Ravindran is the head of the Robert Bosch Centre for Data Science and Artificial Intelligence (RBC-DSAI) at IIT Madras and a professor in the Department of Computer Science and Engineering. He is also the co-director of the reconfigurable and intelligent systems engineering (RISE) group at IIT Madras, which has nearly 80 members associated with it currently. He received his PhD from the University of Massachusetts, Amherst. He has nearly two decades of research experience in machine learning and, specifically, reinforcement learning.

Soumendra Mohanty is the Chief Strategy Officer & Chief Innovation Officer at Tredence. He has led key growth portfolios (IIOT, Data, Analytics, AI, Intelligent RPA, Digital Integration, Digital Experience, Platforms), bringing in world-class capabilities, innovative solutions, and transformation-led, outcomes-led value propositions to our clients. Under his leadership, Tredence has established a wide range of digital and data analytics capabilities and an enviable client-centric innovation culture to solve problems at the convergence of physical and digital.

With a career spanning over 25 years, Soumendra has held various leadership roles at Accenture (Global Data Analytics Lead), Mindtree (SVP & Digital Lead), L&T Infotech (EVP & CDAO), leading multi-faceted P&L functions, including M&A advisory for technology growth strategies and startup ecosystems.

Dr Aravind Chandramouli has a PhD in Computer Science from the University of Kansas with a focus on Information Retrieval and Machine Learning. He started his career at Google in 2007 and went on to roles at Microsoft, GE Research Labs, and Fidelity Investments over a 15-year career. Currently, he heads the AI CoE at Tredence with a focus on innovation. At Tredence, his team focuses on solving complex problems for clients using the right AI techniques. These problems span a wide range of data types, including text, images/videos and structured data. He has six patent grants based on solving hard industry problems that had a direct impact on stakeholders. He has won innovation awards at Microsoft, Fidelity Investments and Tredence, and has over ten publications at top international conferences and journals.

Analytics India Magazine chronicles technological progress in the space of analytics, artificial intelligence, data science & big data by highlighting the innovations, players, and challenges shaping the future of India through promotion and discussion of ideas and thoughts by smart, ardent, action-oriented individuals who want to change the world.

Here is the original post:

Do You Want To Deploy Responsible AI In Your Organization? Join This Session To Operationalize Responsible AI - Analytics India Magazine

Read More..

In-Demand skills research finds the US is one of the most competitive markets for skilled tech workers, but talent scarcity is a global issue -…

ATLANTA, Nov. 9, 2021 /PRNewswire/ -- Despite having one of the largest talent pools, the U.S. faces a major skills gap, with fewer than 10 qualified candidates for each in-demand IT and emerging technology vacancy. This is one of the findings of a report released today by global talent solutions leader Randstad Sourceright. The report highlights how growth in technologies supporting the internet of things, blockchain, cybersecurity, data science and other applications and services has led to an unprecedented and urgent demand for talent.

"The continued talent scarcity and skills gaps most pronounced in IT and emerging technology specialties is concerning to all employers," said Mike Smith, global CEO of Randstad Sourceright. "Companies need to respond in swift and informed ways by using data-driven market insights to attract and source highly skilled candidates. Employers should also consider expanding their recruiting efforts to tap into hybrid or remote talent pools."

Randstad Sourceright's Global Future In-Demand Skills Report, based on data from 26 markets around the world, identifies nine in-demand skills businesses are urgently seeking today and provides insights on the following factors: the potential candidate supply pool in each market, market competitiveness, the industries competing for these skills, the work experience and education levels of local labor pools, and compensation data.

The report found that the U.S. is one of the most competitive markets for all nine in-demand skills, meaning it has among the fewest skilled workers available to fill open positions, followed by India, China and the United Kingdom. The most in-demand skills are artificial intelligence and machine learning, augmented and virtual reality, blockchain, cloud computing, cybersecurity, data science, the internet of things, robotic process automation, and user interface/experience design.

Talent scarcity has increasingly plagued IT and technology companies, which have experienced unprecedented demand for their products and services and are now competing with employers across various industries for digital skills. Although the U.S., China and India have the largest talent pools across most roles, these markets also have high demand for these skills. Data science and cybersecurity were found to have the highest proportion of junior talent, and data science draws on the most varied educational backgrounds, with candidates trained across a range of science, technology, engineering and mathematics (STEM) disciplines.

For more information, download your copy of the 2021 Global Future In-Demand Skills Report.

About Randstad Sourceright

Randstad Sourceright is a global talent solutions leader, driving the talent acquisition and human capital management strategies for the world's most successful employers. We empower these companies by leveraging a Human Forward strategy that balances the use of innovative technologies with expert insights, supporting both organizations and people in realizing their true potential.

As an operating company of Randstad N.V., the world's leading global provider of HR services with revenue of €20.7 billion, Randstad Sourceright's subject matter experts and thought leaders around the world continuously build and evolve our solutions across recruitment process outsourcing (RPO), managed services programs (MSP) and total talent solutions. In 2020, Randstad helped more than two million candidates find a meaningful job with one of our 236,000 clients in 38 markets around the world and trained and reskilled more than 350,000 people. Read more at randstadsourceright.com.

SOURCE Randstad Sourceright

View original post here:

In-Demand skills research finds the US is one of the most competitive markets for skilled tech workers, but talent scarcity is a global issue -...

Read More..