
Main Street Theater - Deep Heart. Engaged Mind.

FOR OUR MAINSTAGE AUDIENCE ONLY (IN RICE VILLAGE):

Patrons will be required to show proof of a negative COVID test result (within 48 hours). A vaccination card may be shown in lieu of the test. Photocopies or a photo on your phone of medical records will be accepted. Masks are strongly recommended but not required. Thank you in advance for your cooperation.

Features

MainStage audiences: Blind Trust Subscriptions are now open!

Theater for Youth: Blind Trust subscriptions are now open!

Applications due April 15, 2022.

By Liz Duffy Adams. Mar 20 - Apr 16, 2022 + Apr 10 discussion with the playwright!

In-Person Camps! Ages 4-18. Multiple Locations.

Based on the book by E. B. White. Adapted by Joseph Robinette. Apr 12 - May 13, 2022.

Celebrate the Bard's Birthday! Online Shakespeare with MST! April 23 at 7:00pm.

Adapted by Caridad Svich. Based on the novel by Mario Vargas-Llosa. May 14 - Jun 5, 2022.

June 11 - July 31, 2022. Based on the MGM Motion Picture.

WE WELCOME: ALL Abilities and Disabilities, ALL Gender Identities, ALL Immigrants and Refugees, ALL Nations of Origin, ALL Races and Ethnicities, ALL Religions and Creeds, ALL Sexual Orientations. EVERYONE.

";err += "You have some jquery.js library include that comes after the Slider Revolution files js inclusion.";err += "To fix this, you can: 1. Set 'Module General Options' -> 'Advanced' -> 'jQuery & OutPut Filters' -> 'Put JS to Body' to on";err += " 2. Find the double jQuery.js inclusion and remove it";err += "

Continued here:
Main Street Theater - Deep Heart. Engaged Mind.


6 ways to use your mind to control pain – Harvard Health

Meditation with guided imagery, which often involves imagining yourself in a restful environment, may reduce your need for pain medication.

Drugs are very good at getting rid of pain, but they often have unpleasant, and even serious, side effects when used for a long time. If you have backache, fibromyalgia, arthritis, or other chronic pain that interferes with your daily life, you may be looking for a way to relieve discomfort that doesn't involve drugs. Some age-old techniques, including meditation and yoga, as well as newer variations, may help reduce your need for pain medication.

Research suggests that because pain involves both the mind and the body, mind-body therapies may have the capacity to alleviate pain by changing the way you perceive it. How you feel pain is influenced by your genetic makeup, emotions, personality, and lifestyle. It's also influenced by past experience. If you've been in pain for a while, your brain may have rewired itself to perceive pain signals even after the signals aren't being sent anymore.

The Benson-Henry Institute for Mind-Body Medicine at Harvard-affiliated Massachusetts General Hospital specializes in helping people learn techniques to alleviate stress, anxiety, and pain. Dr. Ellen Slawsby, an assistant clinical professor of psychiatry at Harvard Medical School who works with patients at the Benson-Henry Institute, suggests learning several techniques so that you can settle on the ones that work best for you. "I tend to think of these techniques as similar to flavors in an ice cream store. Depending on your mood, you might want a different flavor of ice cream, or a different technique," Dr. Slawsby says. "Practicing a combination of mind-body skills increases the effectiveness of pain relief."

The following techniques can help you take your mind off the pain and may help to override established pain signals.

1. Deep breathing. It's central to all the techniques, so deep breathing is the one to learn first. Inhale deeply, hold for a few seconds, and exhale. To help you focus, you can use a word or phrase to guide you. For example, you may want to breathe in "peace" and breathe out "tension." There are also several apps for smartphones and tablets that use sound and images to help you maintain breathing rhythms.

2. Eliciting the relaxation response. An antidote to the stress response, which pumps up heart rate and puts the body's systems on high alert, the relaxation response turns down your body's reactions. After closing your eyes and relaxing all your muscles, concentrate on deep breathing. When thoughts break through, say "refresh," and return to the breathing repetition. Continue doing this for 10 to 20 minutes. Afterward, sit quietly for a minute or two while your thoughts return. Then open your eyes and sit quietly for another minute.

3. Meditation with guided imagery. Begin deep breathing, paying attention to each breath. Then listen to calming music or imagine being in a restful environment. If you find your mind wandering, say "refresh," and call the image back into focus.

4. Mindfulness. Pick any activity you enjoy, such as reading poetry, walking in nature, gardening, or cooking, and become fully immersed in it. Notice every detail of what you are doing and how your senses and emotions are responding. Practice bringing mindfulness to all aspects of your life.

5. Yoga and tai chi. These mind-body exercises incorporate breath control, meditation, and movements to stretch and strengthen muscles. Videos and apps can help you get started. If you enroll in a yoga or tai chi class at a gym or health club, your health insurance may subsidize the cost.

6. Positive thinking. "When we're ill, we often tend to become fixated on what we aren't able to do. Retraining your focus on what you can do instead of what you can't will give you a more accurate view of yourself and the world at large," says Dr. Slawsby. She advises keeping a journal in which you list all the things you are thankful for each day. "We may have limitations, but that doesn't mean we aren't still whole human beings."

Image: Thinkstock

As a service to our readers, Harvard Health Publishing provides access to our library of archived content. Please note the date of last review or update on all articles. No content on this site, regardless of date, should ever be used as a substitute for direct medical advice from your doctor or other qualified clinician.

Originally posted here:
6 ways to use your mind to control pain - Harvard Health


Whistleblower says DeepMind waited months to fire a researcher accused of sexual misconduct – The Verge

A former employee at DeepMind, the Google-owned AI research lab, accuses the company's human resources department of intentionally delaying its response to her complaints about sexual misconduct in the workplace, as first reported by the Financial Times.

In an open letter posted to Medium, the former employee (who goes by Julia to protect her identity) says she was sexually harassed by a senior researcher for months while working at the London-based company. During this time, she was allegedly subject to numerous sexual propositions and inappropriate messages, including some that described past sexual violence against women and threats of self-harm.

Julia got in contact with the company's HR and grievance team as early as August 2019 to outline her interactions with the senior researcher, and she raised a formal complaint in December 2019. The researcher in question reportedly wasn't dismissed until October 2020. He faced no suspension and was even given a company award while HR was processing Julia's complaint, leaving Julia fearing for her and her other female colleagues' safety.

Although the Financial Times report says her case wasn't fully resolved until seven months after she first reported the misconduct, Julia told The Verge that the whole process actually took 10 months. She claims DeepMind's communications team used semantics to push back on the Financial Times story and shorten the amount of time it took to address her case.

"It was in fact 10 months; they [DeepMind] argued it was only 7 because that's when the appeal finished, though the disciplinary hearing took another 2 months, and involved more rounds of interviews for me," Julia said. "My point stands: whether it was 10 months or 7, it was far, far too long."

Besides believing her case was intentionally dragged out, Julia also claims two separate HR managers told her she would face disciplinary action if she spoke out about it. Her manager allegedly required her to attend meetings with the senior researcher as well, despite being partially aware of her report, the Financial Times says. While Julia herself didn't sign a non-disclosure agreement, many other DeepMind employees have.

In a separate post on Medium, Julia and others offered several suggestions as to how Alphabet (Google and DeepMind's parent company) can improve its response to complaints and reported issues, such as doing away with the NDA policy for victims and setting a strict two-month time limit for HR to resolve grievances.

The Alphabet Workers Union also expressed support for Julia in a tweet, noting: "The NDAs we sign should never be used to silence victims of harassment or workplace abuse. Alphabet should have a global policy against this."

In a statement to The Verge, DeepMind interim head of communications Laura Anderson acknowledged the struggles Julia went through but avoided taking accountability for her experiences. "DeepMind takes all allegations of workplace misconduct extremely seriously and we place our employees' safety at the core of any actions we take," Anderson said. "The allegations were investigated thoroughly, and the individual who was investigated for misconduct was dismissed without any severance payments... We're sorry that our former employee experienced what they did and we recognise that they found the process difficult."

DeepMind has faced concerns over its treatment of employees in the past. In 2019, a Bloomberg report said DeepMind co-founder Mustafa Suleyman, also known as Moose, was placed on administrative leave for the controversy surrounding some of his projects. Suleyman left the company later that year to join Google. In 2021, a Wall Street Journal report revealed that Suleyman was deprived of management duties in 2019 for allegedly bullying staff members. Google also launched an investigation into his behavior at the time, but it never made its findings public.

"If anyone finds themselves in a similar situation: first, right now, before anything bad happens, join a union," Julia said in response to the broader concerns. "Then if something bad happens: Document everything. Know your rights. Don't let them drag it out. Stay vocal. These stories are real, they are happening to your colleagues."

Correction April 5th 6:51PM ET: A previous version of the story stated Julia signed an NDA. She did not, but other DeepMind employees have. We regret the error.

Go here to see the original:
Whistleblower says DeepMind waited months to fire a researcher accused of sexual misconduct - The Verge


Ensuring compliance with data governance regulations in the Healthcare Machine learning (ML) space – BSA bureau

"Establishing decentralized Machine learning (ML) framework optimises and accelerates clinical decision-making for evidence-based medicine" says Krishna Prasad Shastry, Chief Technologist (AI Strategy and Solutions) at Hewlett-Packard Enterprise

The healthcare industry is becoming increasingly information-driven. Smart machines are creating a positive impact to enhance capabilities in healthcare and R&D. Promising technologies are aiding healthcare staff in areas with limited resources, helping to achieve a more efficient healthcare system. Yet, with all its benefits, using data to deliver more value-based care is not without risks. Krishna Prasad Shastry, Chief Technologist (AI Strategy and Solutions) at Hewlett Packard Enterprise, Singapore, shares further details on the establishment of a decentralized machine learning framework while ensuring compliance with data governance regulations.

Technology will be indispensable in the future of healthcare, with advancements in various technologies such as artificial intelligence (AI), robotics, and nanotechnology. Machine learning (ML), a subset of AI, now plays a key role in many health-related realms, such as disease diagnosis. For example, ML models can assist radiologists to diagnose diseases, like Leukaemia or Tuberculosis, more accurately and more rapidly. By using ML algorithms to evaluate imaging such as chest X-rays, MRI, or CT scans, radiologists can better prioritise which potential positive cases to investigate. Similarly, ML models can be developed to recommend personalised patient care by observing various vital parameters, sensors, or electronic health records (EHRs). The efficiency gains that ML offers stand to take the pressure off the healthcare system, which is especially valuable when resources are stretched and access to hospitals and clinics is disrupted.
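
To make the prioritisation idea concrete, here is a minimal sketch in Python, assuming a pretrained chest X-ray abnormality classifier saved as a TorchScript file; the file name, preprocessing choices, and single-logit output are hypothetical, not any specific vendor's pipeline. It scores incoming studies and sorts the reading worklist so the most suspicious cases are reviewed first.

```python
# Minimal triage sketch: score chest X-rays with an assumed pretrained
# abnormality classifier and sort the worklist by predicted risk.
from pathlib import Path

import torch
from PIL import Image
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),  # X-rays are single-channel
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

model = torch.jit.load("cxr_abnormality_model.pt")  # hypothetical TorchScript model
model.eval()

def abnormality_score(path: Path) -> float:
    """Return the model's estimated probability that the study has a finding."""
    x = preprocess(Image.open(path)).unsqueeze(0)  # shape: (1, 3, 224, 224)
    with torch.no_grad():
        logit = model(x)
    return torch.sigmoid(logit).item()

# Highest-risk studies first, so radiologists see likely positives sooner.
worklist = sorted(Path("incoming_studies").glob("*.png"),
                  key=abnormality_score, reverse=True)
for study in worklist:
    print(study.name)
```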

Data underpins these digital healthcare advancements. Healthcare organisations globally are embracing digital transformation and using data to enhance operations. Yet, with all its benefits, using data to deliver more value-based care is not without risks. For example, using ML for diagnostic purposes requires a diverse set of data in order to avoid bias. But, access to diverse data sets is often limited by privacy regulations in the health sector. Healthcare leaders face the challenge of how to use data to fuel innovation in a secure and compliant manner.

For instance, HPE's Swarm Learning, a decentralized machine learning framework, allows insights generated from data to be shared without having to share the raw data itself. The insights generated by each owner in a group are shared, allowing all participants to still benefit from the collaborative insights of the network. In the case of a hospital that's building an ML model for diagnostics, Swarm Learning enables decentralized model training that benefits from access to insights of a larger data set, while respecting privacy regulations.
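
Swarm Learning itself is HPE's proprietary, blockchain-coordinated framework; the minimal sketch below only illustrates the underlying decentralized-training idea the article describes, in which sites exchange model parameters rather than raw data. The logistic model, synthetic hospital data, and plain parameter averaging are all assumptions for illustration, not HPE's implementation.

```python
# Toy decentralized training: each "hospital" trains locally on its own data,
# and only learned parameters are pooled, so raw records never leave the site.
import numpy as np

rng = np.random.default_rng(0)

def local_train(weights, X, y, lr=0.1, epochs=50):
    """Run local logistic-regression gradient steps; the data stays on-site."""
    w = weights.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))   # predicted probabilities
        w -= lr * X.T @ (p - y) / len(y)   # gradient of the log-loss
    return w

# Three sites, each holding its own private (here synthetic) data set.
hospitals = [(rng.normal(size=(200, 5)), rng.integers(0, 2, 200)) for _ in range(3)]
global_w = np.zeros(5)

for _ in range(10):                            # collaboration rounds
    local_models = [local_train(global_w, X, y) for X, y in hospitals]
    global_w = np.mean(local_models, axis=0)   # only parameters are shared and averaged

print("collaboratively trained weights:", np.round(global_w, 3))
```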

Partnering with stakeholders across the public and private sectors will enable us to better provide patients access to new digital healthcare solutions that can reform the management of challenging diseases such as cancer. Our recent partnership with AstraZeneca, under their A. Catalyst Network, aims to drive healthcare improvement across Singapore's healthcare ecosystem. Further, Swarm Learning can reduce the risk of breaching data governance regulations and can accelerate medical research.

The future of healthcare lies in working in tandem with technology; innovations in the AI and ML space are already being implemented across the treatment chain in the healthcare industry, with successful case studies that we can learn from. From diagnosis to patient management, AI and ML can be used to perform tasks such as predicting diseases, identifying high-risk patients, and automating hospital operations. As ML models are increasingly used in the diagnosis of diseases, there is an increasing need for data sets covering a diverse set of patients. This is a challenging demand to fulfill due to privacy and regulatory restrictions. Gaining insights from a diverse set of data without compromising on privacy might help, as in Swarm Learning.

AI models are used in precision medicine to improve diagnostic outcomes through integration and by modeling multiple data points, including genetic, biochemical, and clinical data. They are also used to optimise and accelerate clinical decision-making for evidence-based medicine. In the sphere of life sciences, AI models are used in areas such as drug discovery, drug toxicity prediction, clinical trials, and adverse event management. For all these cases, Swarm Learning can help build better models by collaborating across siloed data sets.

As we progress towards a technology-driven future, the question of how humans and technology can work hand in hand for the greater good will remain a question to be answered. But I believe that we will be able to maximise the benefits of digital healthcare, as long as we continue to facilitate collaboration between healthcare and IT professionals to bridge the existing gaps in the industry.

Visit link:
Ensuring compliance with data governance regulations in the Healthcare Machine learning (ML) space - BSA bureau


Privacy And Cybersecurity Risks In Transactions Impacts From Artificial Intelligence And Machine Learning, Addressing Security Incidents And Other…

To print this article, all you need is to be registered or login on Mondaq.com.

Cyberattacks. Data breaches. Regulatory investigations. Emerging technology. Privacy rights. Data rights. Compliance challenges. The rapidly evolving privacy and cybersecurity landscape has created a plethora of new considerations and risks for almost every transaction. Companies that engage in corporate transactions and M&A counsel alike should ensure that they are aware of and appropriately manage the impact of privacy and cybersecurity risks on their transactions. To that point, in this article we provide an overview of privacy and cybersecurity diligence, discuss the global spread of privacy and cybersecurity requirements, provide insights related to the emerging issues of artificial intelligence and machine learning and discuss the impact of cybersecurity incidents on transactions before, during and after a transaction.

There is a common misunderstanding that privacy matters only for companies that are steeped in personal information and that cybersecurity matters only for companies with a business model grounded in tech or data. While privacy issues may not be the most critical issues facing a company, all companies must address privacy issues because all companies have, at the very least, personal information about employees. And as recent publicized cybersecurity incidents have demonstrated, no company, regardless of industry, is immune from cybersecurity risks.

Privacy and cybersecurity are a Venn diagram of legal concepts: each has its own considerations, and for certain topics they overlap. This construct translates into how privacy and cybersecurity need to be addressed in M&A: each stands alone, and they often intermingle. Accordingly, they must both be addressed and considered together.

Privacy requirements in the U.S. are a patchwork of federal and state laws, with several comprehensive privacy laws now in effect or soon to be in effect at the state level. Notably, while it doesn't presently apply in full to personnel and business-to-business personal data, the California Consumer Privacy Act covers all residents of the state of California, not just consumers (despite confusingly calling residents "consumers" in the law). Further, there are specific laws, such as the Illinois Biometric Information Privacy Act and the Telephone Consumer Protection Act, that add further, more specific privacy considerations for certain business activities. And while there is an assortment of laws with a wide variety of enforcement mechanisms, from private rights of action to regulatory civil penalties or even disgorgement of IP, one consistent trend is the increasing potential for financial liability that can befall a non-compliant entity.

Laws in the U.S. related to cybersecurity compliance are not as common as laws related to responding to and notifying of a data breach. In recent years, specific laws and regulations have largely focused on the healthcare and financial services industries. However, legislative and regulatory activity is expanding in this space, requiring increasingly specific technological, administrative and governance safeguards for cybersecurity programs well beyond these two industries. Additionally, while breach response and notification where sensitive personal data is impacted has been a well-established legal requirement for several years now, increasingly complex cyber-attacks on private and public entities have expanded the focus of cybersecurity incident reporting requirements and enterprise cybersecurity risk considerations.

What Does This All Mean for Diligence?

For the buy side, identifying the specifics of what data, data uses and applicable laws are relevant to the target company is pivotal to appropriately understanding the array of risks that may be present in the transaction. Equally, at least basic technological cybersecurity diligence is important to understand the risks of the transaction and potential future integration. For the sell side, entities should be prepared to address their data, data uses and privacy and cybersecurity obligations in diligence requests.

Separately, privacy and cybersecurity diligence should not focus solely on the risks created by past business activity but also consider future intentions for the data, systems and company's business model. If an entity is looking to make an acquisition because it will be able to capitalize on the data that the acquired entity has, then diligence should ensure that those intended uses won't be legally or contractually problematic. This issue is best known earlier than later in the transaction, as it may impact the value of the target or even the desire to move ahead.

In the event that diligence uncovers concerns, some privacy and cybersecurity risks will warrant closing conditions and/or special indemnities to meet the risk tolerance of the acquiring entity. In intense situations, such as where a data breach happens or is identified during a transaction, there may even be a price renegotiation. Understanding the depth and presence of these risks should be front of mind for any entity considering a sale, to allow for timely identification and remediation and, in some instances, to understand how persistent risks may impact the transaction if it moves ahead. For all of these situations, privacy and cybersecurity specialists are critical to the process.

The prevalence of global business, even for small entities that may have overseas vendors or IT support, creates additional layers of considerations for privacy and cybersecurity diligence.

Privacy and cybersecurity laws have existed in certain jurisdictions for years or even decades. In others, the expanded creation of, access to and use of digital data, along with exemplars like the European Union (EU) General Data Protection Regulation, have caused a profound uptick in comprehensive privacy and cybersecurity laws. Depending on how you count, there are close to or over 100 countries with such laws currently or soon to be in place. This proliferation and dispersion of legal requirements means a compounding of risk considerations for diligence.

Common themes in recently enacted and proposed global privacy and cybersecurity laws include data localization, appointed company representatives, restrictions on use and retention, enumerated rights for individuals and significant penalties. Moreover, aside from comprehensive laws that address privacy and cybersecurity, other laws are emerging that are topic-specific. For example, the EU has a rather complex proposed law related to the use of artificial intelligence. It is critical to ensure that the appropriate team is in place to diligence privacy and cybersecurity for global entities and to help companies take appropriate risk-based approaches to understanding the global compliance posture. It can be difficult to strike a balance in diligence priorities due to both the growing number of new global laws and the lack of many (or any) historical examples of enforcement for these jurisdictions. But robust fact-finding paired with continued discussions on risk tolerance and business objectives, and careful consideration of commercial terms, will help.

As mentioned, artificial intelligence is a hot topic for privacy and cybersecurity laws. One of the biggest diligence risks related to artificial intelligence and machine learning (AI/ML) is not identifying that it's being used. AI/ML is a technically advanced concept, but its use is far more prevalent than may be immediately understood when looking at the nature of an entity. Anything from assessing weather impacts on crop production to determining who is approved for certain medical benefits can involve AI/ML. The unlimited potential for AI/ML application creates a variety of diligence considerations.

Where AI/ML is trained or used on personal data, there can be significant legal risks. The origin of training data needs to be understood, and diligence should ensure that the legal support for using that data is sound. In fact, the legal ability to use all involved data should be assessed. Companies commonly treat all data as traditional proprietary information. But privacy laws complicate the traditional property-law concepts, and even if laws permit the use of data, contracts may prohibit it. Recent legal actions have shown the magnitude of penalties a company can face for wrongly using data when developing AI/ML. Notably, in 2021 the FTC determined that a company had wrongly used photos and videos for training facial recognition AI. As part of the settlement, the U.S. Federal Trade Commission ordered that all models and algorithms developed with the use of the photos and videos be deleted. If a company's primary offering is an AI/ML tool, such an order could have a material impact on the company.

Additionally, the use of AI/ML may not result in the intended output. Despite efforts to use properly sourced data and avoid negative outcomes, studies have shown that bias or other integrity issues can arise from AI/ML. This is not to say the technology cannot be accurate, but it does demonstrate that when performing diligence it is crucial to understand the risks that may be present for the purposes and uses of AI/ML.

Security incidents have been the topic of many a headline over the past few years. Some of these incidents are the result of the growing trend of ransomware or other cyber extortions, including data theft extortions or even denial-of-service extortion. The identification of a data security incident may well have a serious impact on a transaction. Moreover, transactions can be impacted by data security incidents occurring before, during and after a transaction. Below we outline some key considerations for each.

An Incident Happened BEFORE a Transaction Started

An Incident Happens DURING a Transaction

An Incident Happens AFTER a Transaction

While far from the totality of privacy and cybersecurity considerations for transactions, these topics should help establish a baseline understanding of what to look for and how to approach privacy and cybersecurity in the current legal environment.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.

Excerpt from:
Privacy And Cybersecurity Risks In Transactions Impacts From Artificial Intelligence And Machine Learning, Addressing Security Incidents And Other...


The Federal Executive Forum’s Machine Learning and AI in Government 2022 – Federal News Network

Date: April 12, 2022 | Time: 1 p.m. ET | Duration: 1 hour | Cost: No Fee

Description: Machine learning and artificial intelligence technology is very important in helping agencies with their people, processes and technology. But how are agencies utilizing this technology and what benefits do they see?

During this webinar, you will learn how federal IT practitioners from the Department of Veterans Affairs and Defense Intelligence Agency are implementing strategies and initiatives around machine learning and artificial intelligence.

The following experts will explore what the future of machine learning and AI in government means to you:

Panelists also will share lessons learned, challenges and solutions and a vision for the future.

Registration is complimentary. Please register using the form on this page or call (202) 895-5023.

By providing your contact information to us, you agree: (i) to receive promotional and/or news alerts via email from Federal News Network and our third party partners, (ii) that we may share your information with our third party partners who provide products and services that may be of interest to you and (iii) that you are not located within the European Economic Area.

More here:
The Federal Executive Forum's Machine Learning and AI in Government 2022 - Federal News Network


AI and machine learning are the future of retail: Survey – ITP.net

Artificial intelligence and machine learning are changing the way retail works, as they create knowledge out of data that retailers can turn into action.

Sixty-five percent of decision makers at retail companies and organisations said AI and ML are mission-critical technologies, according to a survey sponsored by Rackspace Technology.

The technologies provide an opportunity to enhance customer experiences, improve revenue growth potential, undertake rapid innovation and create smart operations, all of which can help businesses to stand out from the competition.

Fifty-eight percent of respondents in the retail space said AI and ML technologies are a high priority for their industry.

Sixty-nine percent reported AI and ML had a positive impact on brand awareness and on brand reputation (67 percent), as well as on revenue generation (72 percent) and on expense reduction (72 percent).

Meanwhile, 75 percent of respondents in retail say they are employing AI and ML as part of their business strategy, IT strategy or both.

Some 68 percent of retail respondents are allocating between 6 percent and 10 percent of their budget to AI and ML projects.

The technology is being used by retailers in an increasingly wide variety of contexts, including improving the speed and efficiency of processes (47 percent), personalising content and understanding customers (43 percent), increasing revenue (41 percent), gaining competitive edge (42 percent) and predicting performance (32 percent), and understanding marketing effectiveness (42 percent).

In an indication of the increasing maturity of the technologies, 66 percent of retail respondents said their AI/ML projects have gone past the experimentation stage and are now either in the optimising/innovating or formalising states of implementation.

There are however challenges when it comes to AI and ML adoption. Thirty-four percent of retail respondents cite difficulties aligning AI and ML strategies to the business.

From a talent perspective, more than half (61 percent) of retail respondents said they have the necessary AI and ML skills within their organisation.

At the same time, more than half of all respondents say that bolstering internal skills, hiring talent and improving both internal and external training are on their agenda.

Comparing departments, 69 percent of retail respondents say IT staff grasp AI and ML benefits while 46 percent in sales, 45 percent in R&D, 44 percent in senior management and boards, 41 percent in customer service and operations and only 34 percent in marketing departments understand the benefits of these technologies.

Read more from the original source:
AI and machine learning are the future of retail: Survey - ITP.net


Leverage machine learning on your iPhone to translate Braille with this free app – 9to5Mac

If you ever thought about learning Braille or just wanted to quickly translate something written in UEB with your iPhone, there's a new app that can help you with that.

Software engineer Aaron Stephenson started learning Braille a few years ago. To put his knowledge into practice, he built an app using CoreML and Vision to find Braille. Now, he has just released an app that can translate Braille (and more) using just your iPhone.

Braille Scanner allows users to take a photo of a piece of paper with Braille on it using their iPhones, and then, within seconds, it's translated to text.

The developer explains his intention behind the project and also the limitations so far:

Braille Scanner was created to help transcribe from Braille to text. It uses a combination of machine learning and vision to do this. The current transcribing model uses Unified English Braille, grade 1, and I'm planning on adding more in the coming app updates.
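
The app itself is built with Apple's CoreML and Vision frameworks on iOS; as a language-neutral illustration of the final transcription step, the Python sketch below maps detected dot patterns to characters using a partial Unified English Braille grade 1 table. The input format is an assumption about what a dot detector might emit, not the app's actual code.

```python
# Once a vision model has located each Braille cell and which of its six dots
# are raised, turning dot patterns into text is a table lookup. The partial
# table below covers UEB grade 1 letters a-j.
UEB_GRADE1 = {
    frozenset({1}): "a",          frozenset({1, 2}): "b",
    frozenset({1, 4}): "c",       frozenset({1, 4, 5}): "d",
    frozenset({1, 5}): "e",       frozenset({1, 2, 4}): "f",
    frozenset({1, 2, 4, 5}): "g", frozenset({1, 2, 5}): "h",
    frozenset({2, 4}): "i",       frozenset({2, 4, 5}): "j",
}

def transcribe(cells):
    """cells: list of sets of raised dot numbers (1-6), one set per Braille cell."""
    return "".join(UEB_GRADE1.get(frozenset(cell), "?") for cell in cells)

# Dots detected for the word "bad": b = dots 1,2; a = dot 1; d = dots 1,4,5.
print(transcribe([{1, 2}, {1}, {1, 4, 5}]))  # -> "bad"
```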

Here are the top features of Braille Scanner for iPhone users:

Since the app just launched, the developer asks for feedback when users find incorrectly translated Braille, so he can build a more accurate machine learning model.

Braille Scanner requires iOS 14.7 or later. It's free to download, and you can find it here on the App Store.

What do you think of this initiative? Share your thoughts in the comment section below.

Related:

FTC: We use income earning auto affiliate links. More.

Check out 9to5Mac on YouTube for more Apple news:

Read more:
Leverage machine learning on your iPhone to translate Braille with this free app - 9to5Mac


Praisidio Uses Machine Learning to Identify At-Risk Employees and Build Tailored Retention Plans with Procaire 3.0 – PR Newswire

New machine learning-driven retention path technology identifies urgently needed actions and enables HR executives to take immediate steps to retain at-risk employees

SAN FRANCISCO, April 5, 2022 /PRNewswire/ -- Praisidio, the leader in talent retention management, today announced the general availability of Procaire 3.0, which includes new patent-pending retention path functionality. Retention paths, auto-generated by machine learning technology, feature curated groups of employees with similar risk factors and include specific retention recommendations. Support for user-defined retention paths is also provided.

Procaire 3.0's retention recommendation engine presents contextually effective recommendations which HR professionals may choose and track. Retention paths enable HR leaders to take immediate actions to significantly reduce voluntary employee attrition.

Additionally, Procaire 3.0 includes retention impact dashboards that reflect in real-time the cumulative business impact of implemented retention actions. Metrics shown include retention improvement, maker time increases, management one-on-one improvement, time in role decreases, etc.

"Procaire provides us early visibility into the causes of attrition, recommends retention activities, and measures the impact of our HR organization's proactive actions. With Procaire retention paths, we were able to identify the main causes of attrition with employees grouped into risk and cause cohorts, allowing us to target retention activities across the company," said Gail Jacobs, Head of Talent and HR Operations, Guardant Health.

"With Procaire retention paths, I was able to identify the main problems in my organization and help our employees. In one example, I helped my organization increase their weekly maker time significantly to reduce the risk of Zoom burnout" said Iga Opanowicz, Sr. People Generalist, Guardant Health.

Customers can use Retention Paths to address groups of employees with similar risk factors such as bias, burnout, stagnation, and disconnection. Moreover, critical employees are surfaced in high-risk cohorts or groups who report to high-attrition managers.

Ben Eubanks, Chief Research Officer of Lighthouse Research & Advisory, remarked: "Our research shows that employers struggle with retention because it's hard to know what specific steps to take. With Procaire retention paths, HR professionals now have the power of machine learning at their fingertips and can easily see the exact retention drivers for their best employees."

After retention actions are taken, Procaire helps ensure follow-up and follow-through via retention workflows and optimizes future recommendations by gauging action efficacy over time.

Procaire 3.0 is immediately available.

About Praisidio

Praisidio is a talent retention management company solving employee attrition. Praisidio's Procaire unifies enterprise and HCM data, applies advanced machine learning, reveals talent risks early in real-time, provides actionable insights, root cause explanations, comparisons, recommendations, and enables employee care at scale to improve employee engagement and retention materially. For more information, visit http://www.praisidio.com.

For media contact, please reach out at [emailprotected].

SOURCE Praisidio, Inc.

See more here:
Praisidio Uses Machine Learning to Identify At-Risk Employees and Build Tailored Retention Plans with Procaire 3.0 - PR Newswire


California FEHC Proposes Sweeping Regulations Regarding Use of Artificial Intelligence and Machine Learning in Connection With Employment Decision…

The California Fair Employment and Housing Council (FEHC) recently took a major step towards regulating the use of artificial intelligence (AI) and machine learning (ML) in connection with employment decision-making. On March 15, 2022, the FEHC published Draft Modifications to Employment Regulations Regarding Automated-Decision Systems, which specifically incorporate the use of "automated-decision systems" in existing rules regulating employment and hiring practices in California.

The draft regulations seek to make unlawful the use of automated-decision systems that "screen out or tend to screen out" applicants or employees (or classes of applicants or employees) on the basis of a protected characteristic, unless shown to be job-related and consistent with business necessity. The draft regulations also contain significant and burdensome recordkeeping requirements.

Before the proposed regulations take effect, they will be subject to a 45-day public comment period (which has not yet commenced) before FEHC can move toward a final rulemaking.

"Automated-Decision Systems" are defined broadly

The draft regulations define "Automated-Decision Systems" broadly as "[a] computational process, including one derived from machine-learning, statistics, or other data processing or artificial intelligence techniques, that screens, evaluates, categorizes, recommends, or otherwise makes a decision or facilitates human decision making that impacts employees or applicants."

The draft regulations provide the following examples of Automated-Decision Systems:

Similarly, "algorithm" is broadly defined as "[a] process or set of rules or instructions, typically used by a computer, to make a calculation, solve a problem, or render a decision."

Notably, the scope of this definition is quite broad and will likely cover certain applications or systems that may only be tangentially related to employment decisions. For example, the term "or facilitates human decision making" is ambiguous. A broad reading of that term could potentially allow for the regulation of technologies designed to aid human decision-making in small or subtle ways.

The draft regulations would make it unlawful for any covered entity to use Automated-Decision Systems that "screen out or tend to screen out" applicants or employees on the basis of a protected characteristic, unless shown to be job-related and consistent with business necessity

The draft regulations would apply to employer (and covered third-party) decision-making throughout the employment lifecycle, from pre-employment recruitment and screening, through employment decisions including pay, advancement, discipline, and separation of employment. The draft regulations would incorporate the limitations on Automated-Decision Systems to apply to characteristics already protected under California law.

The precise scope and reach of the draft regulations are ambiguous in that key definitions define Automated-Decision Systems as those systems that screen out "or tend to screen out" applicants or employees on the basis of a protected characteristic. No clear explanation of the scope of the phrase "tend to screen out" is offered in the proposed regulations, and the inherent ambiguity of the language itself presents a real risk that these regulations will extend to certain systems or processes that are not involved in screening applicants or employees on the basis of a protected characteristic.

The draft regulations apply not just to employers, but also to "employment agencies," which could include vendors that provide AI/ML technologies to employers in connection with making employment decisions

The draft regulations apply not just to employers, but also to "covered entities," which include any "employment agency, labor organization[,] or apprenticeship training program." Notably, "employment agency" is defined to include, but is not limited to, "any person that provides automated-decision-making systems or services involving the administration or use of those systems on an employer's behalf."

Therefore, any third-party vendors that develop AI/ML technologies and sell those systems to third-parties using the technology for employment decisions are potentially liable if their automated-decision system screens out or tends to screen out an applicant or employee based on a protected characteristic.

The draft regulations require significant recordkeeping

Covered entities are required to maintain certain personnel or other employment records affecting any employment benefit or any applicant or employee. Under FEHC's draft regulations, those recordkeeping requirements would increase from two to four years. And, as relevant here, those records would include "machine-learning data."

Machine-learning data includes "all data used in the process of developing and/or applying machine-learning algorithms that are used as part of an automated-decision system." That definition expressly includes datasets used to train an algorithm. It also includes data provided by individual applicants or employees. And it includes the data produced from the application of an automated-decision system operation (i.e., the output from the algorithm).

Given the nature of algorithms and machine learning, that definition of machine-learning data could require an employer or vendor to preserve data provided to an algorithm not just four years looking backward, but to preserve all data (including training datasets) ever provided to an algorithm and extending for a period of four years after that algorithm's last use.

The regulations add that any person who engages in the advertisement, sale, provision, or use of a selection tool, including but not limited to an automated-decision system to an employer or other covered entity, must maintain records of "the assessment criteria used by the automated-decision system for each such employer or covered entity to whom the automated-decision system is provided."

Additionally, the draft regulations would add causes of action for aiding and abetting when a third party provides unlawful assistance, unlawful solicitation or encouragement, or unlawful advertising when that third party advertises, sells, provides, or uses an automated-decision system that limits, screens out, or otherwise unlawfully discriminates against applicants or employees based on protected characteristics.

Conclusion

The draft rulemaking is still in a public workshop phase, after which it will be subject to a 45-day public comment period, and it may undergo changes prior to its final implementation. Although the formal comment period has not yet opened, interested parties may submit comments now if desired.

Considering what we know about the potential for unintended bias in AI/ML, employers cannot simply assume that an automated-decision system produces objective or bias-free outcomes. Therefore, California employers are advised to:

View original post here:
California FEHC Proposes Sweeping Regulations Regarding Use of Artificial Intelligence and Machine Learning in Connection With Employment Decision...
