Category Archives: Deep Mind
New PhD graduate Guodong Zhang hopes to lead the next … – University of Toronto
As issues around the usage and ethics of artificial intelligence (AI) continue to evolve, new University of Toronto graduate Guodong Zhang is ready to address those challenges.
The public should embrace AI as a catalyst for positive change, says Zhang, who graduates this month with his PhD from the department of computer science in the Faculty of Arts & Science.
"Ensuring that AI systems align with human values and remain under human control becomes increasingly critical," he says. "Addressing AI safety is one of the most important and impactful problems we face today."
Zhang intends to be part of the solution. He applied to U of T for his PhD several years ago, inspired by the "godfather of AI," Geoffrey Hinton, University Professor emeritus in the department.
As a PhD student, Zhang taught many courses at U of T. His work on theoretical foundations and practical algorithms for machine learning has already been adopted by major AI players, including Google Brain, DeepMind and OpenAI.
Ahead of convocation, Faculty of Arts & Science writer David Goldberg spoke with Zhang about his research and how U of T prepared him for a future career in AI.
Why was U of T the best place to earn your PhD?
I was captivated by the immense potential of deep learning, so U of T was an obvious choice for me given its leadership in this area. Geoffrey Hinton and his students shocked the world with their results on ImageNet in 2012 with AlexNet, a neural network architecture that started a golden age for deep learning. Furthermore, the prospect of collaborating with the Vector Institute rendered U of T even more special.
How do you explain your work with AI to people outside your field?
I focused on developing neural network models and algorithms that excel in fast training, robust generalization and accurate uncertainty estimation. With neural networks often comprising millions or billions of parameters, the challenge lies in understanding effective optimization techniques for these networks. I am also exploring the phenomenon of why neural networks have such impressive generalization abilities. And finally, another key aspect of my research was investigating whether neural networks can possess awareness of their knowledge gaps.
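To make that last idea concrete: Zhang's published methods are more sophisticated, but one textbook way to give a model some awareness of its knowledge gaps is to train a small ensemble and read disagreement between members as uncertainty. A minimal, purely illustrative NumPy sketch:

```python
import numpy as np

# Illustrative only (not Zhang's actual algorithms): train an ensemble of
# bootstrap-resampled logistic regressions and use the spread of their
# predictions as a rough uncertainty estimate.

rng = np.random.default_rng(0)

# Synthetic binary classification data: two Gaussian clusters.
X = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
y = np.concatenate([np.zeros(100), np.ones(100)])

def train_logreg(X, y, steps=500, lr=0.1):
    """Fit logistic regression by gradient descent on a bootstrap resample."""
    idx = rng.integers(0, len(X), len(X))    # bootstrap -> ensemble diversity
    Xb, yb = X[idx], y[idx]
    w, b = rng.normal(0, 0.1, X.shape[1]), 0.0
    for _ in range(steps):
        p = 1 / (1 + np.exp(-(Xb @ w + b)))  # sigmoid predictions
        w -= lr * Xb.T @ (p - yb) / len(yb)  # gradient of the log loss
        b -= lr * np.mean(p - yb)
    return w, b

ensemble = [train_logreg(X, y) for _ in range(10)]

def predict_with_uncertainty(x):
    probs = np.array([1 / (1 + np.exp(-(x @ w + b))) for w, b in ensemble])
    return probs.mean(), probs.std()  # mean prediction, member disagreement

print(predict_with_uncertainty(np.array([2.0, 2.0])))  # deep inside a cluster
print(predict_with_uncertainty(np.array([0.0, 0.0])))  # between the clusters
```

Near the boundary between the clusters the ensemble members disagree more, so the standard deviation rises: a crude signal that the model is outside what it knows.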
How is your work with AI going to improve life for the average person?
My research on neural network training dynamics holds significant importance in the realm of large language models and AI research. These models, used in programs such as ChatGPT, have become ubiquitous in our daily lives and play a vital role in various applications. For example, they assist us with translation, enhance our essay writing and serve as virtual assistants to address our queries.
There's controversy surrounding some of the ways AI is being used. Why do you think people need to embrace its potential?
The public should embrace AI as a catalyst for positive change because it enhances efficiency and productivity across industries, automates mundane tasks allowing for more meaningful work and improves problem-solving and decision-making through data analysis. In addition, it augments human capabilities and drives innovation while also helping us to address societal challenges like inequality and sustainability.
Embracing AI responsibly ensures transparency, accountability and ethical considerations, unlocking AI's potential for positive impact in our society. I think the public should also be involved in regulating AI, as AI systems could be very powerful and misuse of them could lead to catastrophic consequences.
What career path will you pursue after graduation and how will your U of T education help you excel?
I will work as an AI researcher in industry, focusing on large language models. My education at U of T has equipped me with extensive knowledge in deep learning and artificial intelligence. Under the guidance of my advisor, Associate Professor Roger Grosse, and collaboration with colleagues, I have gained valuable insights into neural network training dynamics and AI safety.
This expertise enables me to enhance the efficiency of large language model training while ensuring alignment with human values. My PhD work on understanding neural network training with a noisy quadratic model has already been used by many big industrial labs (including Anthropic, DeepMind, OpenAI) in training the latest models.
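A noisy quadratic model treats training as optimization of a simple quadratic loss with noisy mini-batch gradients, which makes effects like the interaction of batch size and learning rate easy to study. A rough simulation in that spirit (the constants and noise model here are simplifying assumptions, not the published setup):

```python
import numpy as np

# Sketch of a noisy quadratic model (NQM) of training dynamics.
# Loss: L(w) = 0.5 * sum_i h_i * w_i^2, where h_i are curvature eigenvalues.
# Each mini-batch gradient is the true gradient plus noise whose variance
# shrinks as the batch size grows.

rng = np.random.default_rng(0)
d = 100
h = np.logspace(0, -3, d)  # spread of curvature eigenvalues

def final_loss(lr, batch_size, steps=2000):
    w = np.ones(d)
    for _ in range(steps):
        grad = h * w                                    # exact gradient
        noise = rng.normal(0, 1, d) * np.sqrt(h / batch_size)
        w -= lr * (grad + noise)                        # noisy SGD step
    return 0.5 * np.sum(h * w**2)

# Larger batches mean less gradient noise, hence a lower loss floor at a
# fixed learning rate -- one batch-size effect such a model captures.
for bs in [1, 16, 256]:
    print(f"batch size {bs:4d} -> final loss {final_loss(0.5, bs):.5f}")
```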
What advice do you have for people considering their PhD in your field or beyond?
I recommend everyone maintain a curious mindset and pursue their passions. Curiosity is vital for scientific progress. It is also crucial to remain open-minded and committed to lifelong learning. Our field is rapidly evolving, rendering knowledge from just a few years ago potentially outdated, so continuous learning is essential.
Data Protection update – May 2023 – Stephenson Harwood
Welcome to the Stephenson Harwood Data Protection bulletin, covering the key developments in data protection law from May 2023.
The month of May marked the fifth anniversary of the EU GDPR and it was commemorated with a bang.
Just days before the GDPR's official birthday, Meta was served a record €1.2 billion fine for data protection breaches. The fine, the largest ever imposed under the GDPR, came after Ireland's Data Protection Commission found the tech giant had violated the law by transferring personal data of EU Facebook users to the US without appropriate safeguards.
Meta has six months to remediate the unlawful processing, including storage, in the US of personal data of European users. Andrea Jelinek, chair of the European Data Protection Board, said the unprecedented fine sends a strong signal to organisations that serious infringements have far-reaching consequences.
Still, Meta hasn't taken the decision lying down. In a statement, the tech company vowed to appeal the ruling, which it says could have implications for thousands of businesses that rely on the ability to transfer data between the EU and US in order to operate.
May also saw the Court of Justice of the European Union hand down four pivotal preliminary rulings related to the application of the EU GDPR. The rulings clarified the law in relation to four legal issues: the accountability principle, the right of access under Article 15 EU GDPR, compensation under Article 82 EU GDPR and joint controllers.
In this months issue:
On 22 May, the Irish Data Protection Commission ("DPC") announced that Meta has been fined €1.2 billion, the largest fine to date issued under the EU General Data Protection Regulation ("EU GDPR").
The DPC's decision against Meta has three parts:
With the EU-US draft adequacy agreement still not in place (the European Parliament voted against the proposed agreement in a non-binding resolution earlier in May), the DPC's decision lands Meta's US-EU data transfers in a difficult, uncertain position. The decision also has profound ramifications for anyone transferring personal data to the US under the EU GDPR, as it demonstrates that it may be very difficult to do so lawfully under any of the existing legal mechanisms and derogations, in light of the incompatibility of US law with European fundamental rights. The issue is especially difficult for transfers to any electronic communications service provider (such as Meta) that may be required to hand over European data to US national security agencies under the US federal law FISA.
For further analysis of the DPC's decision and what it means for any business making overseas transfers, look out for our upcoming Insight deep dive on our data protection hub.
On 4 May, the Court of Justice of the European Union ("CJEU") handed down four preliminary rulings relating to the application of the EU GDPR.
The CJEU considered the accountability principle, the right of access under Article 15 EU GDPR, compensation under Article 82 EU GDPR, and the position of joint controllers.
For more information on these decisions, read our Insight.
On 15 May, the UK government announced that it is scaling back the Retained EU Law (Revocation and Reform) Bill ("REUL Bill"). The government provided a revised list outlining which pieces of legislation are being revoked with justifications provided for each.
Since Brexit, over 1,000 EU laws have been revoked or reformed in the UK. The REUL Bill will revoke a further 600 laws, in addition to the 500 pieces of legislation that will be revoked by the Financial Services and Markets Bill and the Procurement Bill. The government justifies this decision by stating that it will lighten the regulatory burden for businesses and encourage economic growth.
This decision reflects a scaled down promise in contrast to the government's initial plans to scrap thousands of EU laws by the end of this year. However, in its press release, the government outlined plans to continue reviewing remaining EU laws in order to identify further opportunities for reform. The REUL Bill creates a mechanism that enables this ongoing aim of revoking EU law.
Some minor pieces of data protection legislation will be revoked by the REUL Bill, such as the Data Retention and Acquisition Regulations 2018. However, more significantly, the government has stated that it will remove the current interpretive principles and the structure providing for the supremacy of all EU law. This means UK courts could be permitted to overrule EU precedents, and there will be significant uncertainty as to how to interpret terms from retained EU laws. In the context of data protection, there may be uncertainty as to the supremacy and interpretation of the UK General Data Protection Regulation ("UK GDPR").
The REUL Bill will return to the House of Commons after the House of Lords concludes its debate.
Stay tuned for further updates on how post-Brexit regulatory reform will affect data protection in the UK.
On 17 April, the Data Protection and Digital Information (No. 2) Bill ("DPDI Bill") had its second reading in the House of Commons. This provided us with our first opportunity to hear what MPs had to say about the DPDI Bill. Their primary concerns surrounded retaining its adequacy with the EU and the struggle to balance the interests of big tech and consumers. For more information on the second reading, read our Insight.
Following this, the DPDI Bill moved to Committee stage. This stage involves a House of Commons committee hearing evidence and conducting a detailed examination of a bill. On 10 May, a House of Commons committee heard evidence from 23 witnesses. John Edwards, the UK Information Commissioner, was among those providing evidence.
Edwards assisted the committee with a forensic analysis of the wording of the DPDI Bill. He outlined that the use of phrases such as 'high-risk activities' does not provide decision-makers with sufficient clarity when interpreting legislation. Edwards argued that the ICO and other decision-makers would appreciate further clear criteria to assist them with issuing guidance and interpreting the legislation. Removing as much uncertainty as possible from the DPDI Bill should be the aim, as this will enable greater efficiency. Edwards also outlined his concerns surrounding the future role of ministers: the current DPDI Bill provides scope for ministers to overrule the ICO and refuse to publish its statutory codes, threatening to undermine the independence of the ICO.
Other witnesses expressed concerns relating to the DPDI Bill's provisions on automated decision-making and its impact on the UK retaining adequacy with the EU.
The DPDI Bill will now move to its third reading, representing the House of Commons' final chance to debate the contents of the bill and vote on its approval. If approved, the DPDI Bill will move to the House of Lords for consideration.
On 4 May, leaders of some of Europe's largest technology companies wrote to the European Commission outlining their concerns regarding the EU's forthcoming Data Act.
As we previously reported, the Data Act will bring in a new landscape for data access, data portability and data sharing. It includes provisions that introduce common rules on the sharing of data generated by connected products or related services and will compel data holders to make data available to public bodies without undue delay where there is an exceptional need for the public body to use the data. The European Commission are adamant that the Data Act will ensure fairness in the digital environment, stimulate a competitive data market, open opportunities for data-driven innovation and make data more accessible for all.
However, the concerns raised in this letter from the technology companies suggest that not all stakeholders agree on whether the Data Act is on track to achieve its aims. The letter was organised by DigitalEurope and is signed by the chief executives of Siemens, Siemens Healthineers, SAP, Brainlab and Datev. It expressed concerns about supporting European competitiveness and protecting businesses against cyber attacks and data breaches, and outlined three key concerns:
Executives at SAP say that they welcome the objectives of the Data Act to create a common EU regulatory framework and facilitate data sharing. However, they insist that the Data Act needs further amendments in order to preserve contractual freedom, allowing providers and customers to agree on terms that reflect business needs.
The letter asks the European Commission to pause the process, enabling changes to the proposed Act. Time will tell whether the Data Act will be further delayed in the face of these concerns. The Swedish presidency entered into negotiations (or 'trilogue') with the European Parliament on the final version of the Data Act in March and further trilogues are expected to take place in May and beyond.
The ICO, the UK Data Protection Authority ("DPA"), issued new guidance for businesses and employers on Employee Data Subject Access Requests ("DSARs").
Data subjects have the right of access under the EU GDPR, meaning they can request a copy of their personal information from organisations. This is a right often exercised by employees against their employers or former employers. Employees can request any personal data held by the employer, such as attendance details, sickness records or personal development and other HR records. The ICO reported in its press release that it received 15,848 complaints relating to DSARs between April 2022 and March 2023. In light of this, it has now released new, enhanced guidance on how employers should respond to DSARs.
The new guidance covers key issues, including the following points:
For more information, you can access the ICO's full guidance here.
In the midst of growing anxiety across the tech industry about the potential impact of AI, and some stark warnings from industry experts, including Geoffrey Hinton (the so-called "godfather of AI"), that the recent rapid development in the capabilities of AI may pose an existential risk to humankind unless urgent action is taken, Prime Minister Rishi Sunak appears to be contemplating an alternative approach to the UK's regulation of AI. There are reports that the government is considering tighter regulation, and talk of a new global regulator (or at least the creation of a new UK AI-focused watchdog).
Back in March, we reported that the UK Government had published a white paper outlining its plans to regulate AI (the "AI White Paper"). The government's intention was for the AI White Paper to foster a pro-innovation approach to AI regulation which focusses on its benefits and potential whilst avoiding unnecessary burdens to business and economic growth. The AI White Paper is currently open for consultation, which is set to conclude on 21 June, although industry figures have warned that the AI White Paper is now already out of date.
The government may concede that there has been a shift in its approach since the AI White Paper was published, with reports of government insiders insisting that they "want to stay nimble, because the technology is changing so fast", and expressing their wish to avoid the product-by-product regulatory regime, such as the one that is envisaged by the EU's AI Act.
It appears that Sunak may also be applying pressure on the UK's allies, seeking to construct an international agreement in relation to how to develop AI capabilities, which could entail the establishment of a global regulator. Given that the EU has been unable to reach an agreement since the draft AI Act was published over two years ago, Sunak's plan to formulate and subsequently agree such an international agreement in a short period of time appears somewhat optimistic.
Domestically, MPs from both the Conservative and Labour party are calling for an AI bill to be passed, which might set certain conditions for companies seeking to create and develop AI in the UK and lead to the creation of a UK regulator. It remains to be seen what approach the government will take to regulating AI in the UK and what aspiration it has to lead on such regulation on the global stage.
Over in the US, American lawmakers are arguing that federal regulation of AI is necessary for innovation. Speaking at an event in Washington, DC, on 9 May, US Representative Jay Obernolte said that regulation to mitigate potential harms and provide customer protection is something which "is very clearly necessary when it comes to AI." Obernolte further stressed that regulation of data privacy and AI must coincide, given the vast amounts of information AI models require to learn and AI's ability to pierce digital data privacy, reaggregate personal data and build behavioural models to predict and influence behaviour.
In early May, the Biden Administration (the "Administration") announced new actions which it says are set to further promote responsible American innovation in AI as well as protect people's rights and safety. Emphasising the need to place people and communities at the centre of AI development, by supporting responsible innovation that serves the public good, the Administration said that companies have a fundamental responsibility to ensure that the products they provide are safe prior to deployment for public use.
The Administration has also announced an independent commitment from leading AI developers including Google, Microsoft, NVIDIA and OpenAI to participate in a thorough public evaluation of AI systems. These actions all contribute to a broader and ongoing effort for the Administration to engage with a variety of stakeholders on critical AI issues.
Belinda Dennett, Microsoft Corporate Affairs Director, spoke to members of Australia's parliament at a parliamentary hearing on 3 May, to communicate her view that the government should collaborate with industry and society on principles-based measures or co-regulation with regard to AI, rather than taking a more targeted and direct regulatory response.
Dennett's comments reflect Microsoft's view that there is a risk in seeking to regulate what is known today in relation to generative AI technologies, as that knowledge can rapidly go out of date. The effect of this risk is such that any policy seeking to regulate generative AI would soon find itself trailing behind the development of the technology being regulated.
In making her remarks, Dennett specifically referred to the recent rapid enhancement in the capabilities of generative AI technologies such as ChatGPT and explained that "this was innovation we weren't expecting for another ten years." Dennett also praised calls which have been made for summits and various other discussions around the generative AI boom on the basis that, for AI, "society needs to decide where those guardrails should be."
Microsoft's comments come as Australia joins other jurisdictions needing to act quickly to determine how best to regulate AI and generative AI in particular, which we considered in our April 2023 bulletin.
In October 2022, Joseph Sullivan, Uber Technologies' former security chief, was convicted of obstruction of a federal proceeding and of concealing or failing to report a felony. Sullivan's conviction arose in connection with a 2016 cyber breach that affected 57 million Uber drivers and riders. In response to the breach, Sullivan devised a scheme by which the hackers who had breached Uber's network were paid $100,000 through the company's 'bug bounty' scheme and were induced into signing a non-disclosure agreement, such that Uber's legal team and the US Federal Trade Commission officials would not find out.
Sentenced in early May, Sullivan was handed a three-year term of probation and ordered to pay a fine of $50,000. Although Sullivan has avoided time in prison, US District Judge William Orrick made clear that if he were to preside over a similar case in the future, "even if the character is that of Pope Francis, they should expect custody." Sullivan's case illustrates that chief information security officers ("CISOs") should work with lawyers to establish whether a breach has occurred and whether it should be reported. It has also accelerated a transition whereby CISOs report breaches more directly to their organisation's senior executives.
Consequently, companies should now be reconsidering their processes for breach identification and the documentation of decisions regarding breaches in order to develop more robust breach response procedures. This will allow companies to cultivate a culture of shared responsibility for taking decisions associated with cybersecurity breaches, which will, in turn, assist CISOs with avoiding personal liability.
Following a year-long inquiry into the abuse of spyware in the EU, the European Parliament's Committee of Inquiry has adopted its final report and recommendations. The inquiry investigated the use of surveillance spyware such as "Pegasus", which can be covertly installed on mobile phones and is capable of reading text messages, tracking location, accessing the device's microphone and camera, and harvesting information from apps.
MEPs stated that the use of spyware in Hungary constitutes "part of a calculated and strategic campaign to destroy media freedom and freedom of expression by the government", and in Poland the use of Pegasus has been part of "a system for the surveillance of the opposition and critics of the government designed to keep the ruling majority and the government in power". To remedy these major violations of EU law, the MEPs called on Hungary and Poland to comply with European Court of Human Rights ("ECHR") judgments, restore judicial independence and oversight institutions as well as launch credible investigations into abuse cases to help ensure citizens have access to proper legal redress. In Greece, where spyware "does not seem to be part of an integral authoritarian strategy, but rather a tool used on an ad hoc basis for political and financial gains", MEPs called on the government to repeal export licences that are not in line with EU export control legislation. Elsewhere across the EU in Spain, although the country has "an independent justice system with sufficient safeguards", MEPs called on Spanish authorities to ensure "full, fair and effective" investigations.
In order that illicit spyware practices are stopped immediately, MEPs recommended that spyware should only ever be used by member states in which allegations of spyware abuse have been thoroughly investigated, national legislation is in line with recommendations of the Venice Commission and CJEU and ECHR case law, Europol is involved in investigations, and export licences not in line with export controls are repealed. MEPs further recommended that the Commission should assess whether these conditions are met by member states by December 2023. In order to prevent attempts to justify abuses, the MEPs also called for a common legal definition of 'national security' as grounds for surveillance.
MEPs adopted the report and recommendations and the text outlining the recommendations is expected to be voted on by the full Parliament during the plenary session starting on 12 June.
In a statement released earlier this month, Toyota Motor Corporation ("Toyota") confirmed that a human error rendered the vehicle data of around 2.15 million customers publicly accessible in a period spanning almost a decade from November 2013 to April 2023.
The incident, which Toyota states was caused by a "misconfiguration of the cloud environment" as a result of the cloud system having been accidentally set to public rather than private, meant that data including vehicle identification numbers and vehicle location data was potentially accessible by the public. Toyota has said that the accessible data alone was not sufficient to enable identification of affected data subjects and that there had been no reports of malicious use of the data.
Although it has confirmed that the data in question is confined to that of its Japanese customers, the number of potentially affected customers constitutes almost the entirety of Toyota's customer base who had signed up for its main cloud service platforms since 2012, which are essential to its autonomous driving and other AI-based offerings. Affected customers include those who use the T-Connect service, which provides a range of services such as AI-voice driving assistance, and also users of G-Link, which is a similar service for owners of Lexus vehicles.
The incident had only recently been discovered by Toyota as it targets an expansion of its connectivity services. Toyota said that the "lack of active detection mechanisms, and activities to detect the presence or absence of things that become public" was the cause of the failure to identify the issue earlier. Toyota has stated that it will take a series of measures to prevent a recurrence of the incident including implementing a system to audit cloud settings, establishing a system to continuously monitor settings and educating employees on data handling rules.
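Toyota has not published technical specifics, so purely as an illustration of what "a system to audit cloud settings" can look like in practice, here is a sketch that flags publicly readable storage buckets. It assumes an AWS S3-style environment for the sake of example; Toyota's actual cloud stack has not been disclosed.

```python
import boto3  # AWS SDK -- an illustrative choice only; Toyota has not said
              # which cloud provider or which settings were involved.

# ACL grantee URIs that make an S3 bucket readable by the public at large.
PUBLIC_GRANTEES = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def find_public_buckets():
    """Return names of buckets whose ACLs grant access to public groups --
    the 'set to public rather than private' class of misconfiguration."""
    s3 = boto3.client("s3")
    public = []
    for bucket in s3.list_buckets()["Buckets"]:
        acl = s3.get_bucket_acl(Bucket=bucket["Name"])
        if any(g["Grantee"].get("URI") in PUBLIC_GRANTEES for g in acl["Grants"]):
            public.append(bucket["Name"])
    return public

if __name__ == "__main__":
    # Run on a schedule to approximate the continuous monitoring of settings
    # that Toyota says it will now put in place.
    for name in find_public_buckets():
        print(f"WARNING: bucket '{name}' is publicly accessible")
```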
The Japanese Personal Information Protection Commission has been informed of the incident but has not provided comment at this stage. However, the Japanese automaker subsequently announced that customer information in some countries throughout Oceania and Asia may also have been left publicly accessible from October 2016 to May 2023. In this instance, potentially leaked customer data may include names, addresses, phone numbers and email addresses.
You can read Toyota's statement in Japanese here.
The High Court has brought Prismall v Google and DeepMind to an early conclusion, ruling that Andrew Prismall and the 1.6 million class members he represents cannot go to trial.
Andrew Prismall sued Google and DeepMind under the tort of misuse of private information on behalf of 1.6 million NHS patients after, in 2016, it was revealed that DeepMind transferred the patients' data without their knowledge or consent. To make this claim, Prismall was required to show that the class of patients had a reasonable expectation of privacy and that DeepMind deliberately and without justification obtained and used the data. Prismall also had to show that all members of the class had the same interest. This follows the principle set out in Lloyd v Google that a representative action cannot succeed if it requires an individualised assessment of class members' loss.
Prismall argued that, without needing an individualised assessment, he could show that each class member had a reasonable expectation of privacy in relation to the relevant personal data, this expectation was unjustifiably interfered with and such interference entitled them to an award of more than trivial damages. However, the court ruled that there was no realistic prospect of the class members meeting these requirements. The court found that:
Mrs Justice Williams struck out the case and ruled that a summary judgment should be entered in favour of Google and DeepMind.
The case was one of the few opt-out class actions that continued after the Lloyd v Google ruling narrowed the options for bringing such claims under the UK GDPR. It appears that misuse of private information was not a viable alternative in this case.
For more information, you can access the full judgment here.
A Belgian data subject complained to the Belgian DPA after being informed of his obligations under the US Foreign Account Tax Compliance Act ("FATCA") by his bank. The Belgian DPA has now ordered Belgium's Federal Public Service Finance to stop processing complainants' data in relation to FATCA transfers, arguing that such transfers breach the EU GDPR.
FATCA's aim is to combat tax fraud, money laundering and tax evasion. 87 countries have entered FATCA agreements with the US. Under FATCA, non-US banks must send information about any accounts held by American citizens to the corresponding non-US government, who then shares the information with the US Internal Revenue Service (the "IRS"). This information constitutes personal data under the EU GDPR.
The Belgian DPA originally decided that the FATCA transfers did not breach the EU GDPR and that Schrems II did not apply. However, the Belgian DPA's litigation arm disagreed. It found that data subjects are not able to understand the purposes of processing in relation to FATCA transfers and concluded that FATCA transfers breach the EU GDPR's purpose limitation, data minimisation and proportionality principles. The Federal Public Service Finance had also failed to carry out a data protection impact assessment in relation to the transfers. In addition, the FATCA transfers were found not to be subject to appropriate safeguards. As a result, the Belgian DPA ordered that transfers of personal data to the US under the FATCA regime must cease.
This does not represent the only challenge to FATCA. A US-born data subject now residing in the UK has complained to the High Court that FATCA transfers are disproportionate and breach her rights under the EU GDPR. However, the impact of ceasing FATCA transfers is questionable. American Citizens Abroad, a non-profit organisation, commented that the Belgian DPA decision will not get rid of US tax problems for expats. It argued that the IRS has an obligation to enforce US tax laws and if the required information cannot be provided via FATCA transfers, it will come to light another way.
The US Federal Trade Commission ("FTC") filed a complaint against Meta in 2011, resulting in a 2012 privacy order barring Meta from misrepresenting its privacy practices. After a subsequent complaint from the FTC, relating to Meta's misrepresentations that fed into the Cambridge Analytica scandal, Meta agreed to another privacy order in 2020. This 2020 order compelled Meta to pay a $5 billion penalty.
In a press release dated 3 May, the FTC claims that Meta has now violated the privacy promises that it made in the 2020 privacy order. The FTC's claim is based on the following points:
As a result, the FTC proposes to make the following changes and extensions to the privacy order:
The FTC has requested that Meta respond to these claims within 30 days. Meta has pledged to fight this action robustly, labelling it a political stunt.
May saw the latest enforcement action against Clearview AI, following numerous recent sanctions against the facial recognition platform.
On 9 May, the Austrian DPA found that Clearview AI was not complying with the EU GDPR. Following a request for access, a data subject found that their image data had been processed by Clearview AI. The Austrian DPA found that Clearview AI had processed the personal data in breach of the lawfulness, fairness and transparency principles, and had breached data retention rules by storing data permanently. In addition, Clearview AI's processing of the data served a different purpose from the original publication of the data subject's personal data. The Austrian DPA ordered Clearview AI to erase the complainant's personal data and to designate a representative in the EU.
In another decision handed down in May, the Australian Administrative Appeals Tribunal ruled that Clearview AI's collection of Australian facial images without consent breached the country's privacy standards. As a result, the Australian authority ordered Clearview AI to leave the country and delete all Australian images that it had gathered.
This follows action taken against Clearview AI in April. The French DPA fined Clearview AI €5.2 million for its failure to comply with the DPA's earlier order to stop collecting and processing personal data of individuals located in France.
This wave of enforcement action reflects the ongoing battle of applying data protection requirements to ever-evolving AI technologies.
We reported in March that Marc Van der Woude, president of the EU's General Court, warned that a wave of Digital Markets Act ("DMA") litigation was looming. The DMA places obligations on Big Tech platforms (referred to as "Gatekeepers") to create a fairer environment for business users and to ensure that consumers can access better services and easily switch providers.
The first step of the DMA's implementation kicked off on 2 May. This step looks into the classification of certain platforms as Gatekeepers. Any platforms labelled with this designation will be prohibited from certain behaviours and practices. Three main criteria are involved in deciding whether a platform is a Gatekeeper: a significant impact on the internal market; operation of a core platform service that is an important gateway for business users to reach end users; and an entrenched and durable position.
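As a rough illustration of how the quantitative side of those criteria combines, the sketch below reduces the DMA's headline presumption thresholds to a simple check. Actual designation also involves qualitative assessment, rebuttals and Commission discretion, so treat the figures as indicative only.

```python
# Illustrative sketch of the DMA's quantitative presumption thresholds.
# Real designation is a legal process, not a boolean check.

def presumed_gatekeeper(
    eea_turnover_eur: float,        # annual EEA turnover, each of last 3 years
    market_cap_eur: float,          # average market capitalisation
    monthly_eu_end_users: int,      # monthly active end users in the EU
    yearly_eu_business_users: int,  # yearly active business users in the EU
    thresholds_met_3_years: bool,   # user thresholds met 3 years running
) -> bool:
    significant_impact = eea_turnover_eur >= 7.5e9 or market_cap_eur >= 75e9
    important_gateway = (
        monthly_eu_end_users >= 45_000_000
        and yearly_eu_business_users >= 10_000
    )
    entrenched_and_durable = thresholds_met_3_years
    return significant_impact and important_gateway and entrenched_and_durable

# A platform of this scale would be presumed a Gatekeeper and would have to
# notify the European Commission of its core platform services.
print(presumed_gatekeeper(9e9, 120e9, 60_000_000, 25_000, True))  # True
```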
Any organisations labelled as Gatekeepers will be subject to the DMA's list of dos and don'ts. For example, Gatekeepers must not prevent consumers from linking up to businesses outside the Gatekeeper's platform or prevent users from uninstalling any pre-installed software or app if they wish to.
By 3 July, potential Gatekeepers must notify their core platform services to the European Commission if they meet the DMA's thresholds. The European Commission, following such notifications, has 45 working days to assess whether the organisation is a Gatekeeper. Any designated Gatekeepers will then have 6 months to comply with the DMA's requirements.
Each month, we bring you a round-up of notable data protection enforcement actions.
Company | Authority | Fine | Comment
Meta Ireland | Irish DPA | €1.2 billion | See our coverage of the Irish DPA's decision above.
GSMA | Spanish DPA | €200,000 | GSMA failed to carry out a data protection impact assessment in relation to a data subject's special category data.
B2 Kapital | Croatian DPA | €2.26 million | Representing the Croatian DPA's highest EU GDPR fine to date, B2 Kapital was fined for failing to prevent data security breaches.
Clearview AI | | |
DeepMind cofounder warns governments seriously need to find solutions for people who lose their jobs to A.I. – Fortune
Governments will have to find a solution for knowledge sector workers whose jobs are automated away thanks to the advent of artificial intelligence, a leading expert in the field warned.
Mustafa Suleyman, who cofounded the London-based lab DeepMind, later sold to Google in 2014, told attendees of the GIC Bridge Forum event in San Francisco on Tuesday that policymakers needed to step up and provide some form of aid, such as universal basic income (UBI).
"That needs material compensation," said Suleyman, according to a report by the Financial Times. "This is a political and economic measure we have to start talking about in a serious way."
In March, Goldman Sachs argued that generative A.I. tools like Midjourney and ChatGPT, which can create content almost indistinguishable from human work, could leave 300 million full-time workers across the U.S. and Europe out of a job.
"Unquestionably, many of the tasks in white-collar land will look very different in the next five to 10 years," Suleyman continued.
The warning from the DeepMind cofounder, who has since gone on to launch a new startup called Inflection with the aid of LinkedIn billionaire Reid Hoffman, is not the first from a leading mind in the field of tech.
In March, Elon Musk added his name to Steve Wozniak's and a long list of other distinguished signatories pushing for a delay in advanced A.I. research.
They argued that decisions about A.I. "must not be delegated to unelected tech leaders" and that more powerful systems should be developed "only once we are confident that their effects will be positive and their risks will be manageable."
OpenAI CEO Sam Altman, meanwhile, has attempted to assuage concerns about his ChatGPT and DALL-E products, arguing a lot of people will be made very rich by A.I. in the process.
Just how many people could also be made extremely poor is the real question that could prove explosive, particularly for the two countries that DeepMind effectively calls home: the U.K. and the United States.
Should tens of millions of knowledge sector workers or more lose their jobs to generative A.I., without any real plan to cushion the blow, the resulting upheaval could prove significantly disruptive.
"There are going to be a serious number of losers [that] will be very unhappy, very agitated," warned Suleyman.
DeepMind Cofounder: Old-School Google Search Will Be Gone in a Decade – Business Insider
DeepMind co-founder Mustafa Suleyman has a chilling warning for Google, his former employer: The internet as we know it will fundamentally change and "old school" Search will be gone in a decade.
"If I was Google I would be pretty worried because that old school system does not look like it's gonna be where we're at in 10 years time," he said during a recent episode of the No Priors podcast.
Suleyman started DeepMind, a pioneering artificial intelligence company, with Demis Hassabis and Shane Legg, in 2010. Google bought it in 2014, and the firm went on to develop ground-breaking inventions, including AlphaFold, an AI model that can predict protein structures. Suleyman left Google a couple of years ago and co-founded a startup called Inflection AI, which recently launched its first product, a personalized chatbot called Pi.
In 2019, Suleyman switched from DeepMind to a VP role at Google. The move followed an internal investigation at DeepMind into allegations that Suleyman had bullied staff. Insider reported that complaints over Suleyman's behavior had been raised for several years. He has apologized and has said that he "really screwed up."
During his final period at Google, Suleyman worked on LaMDA, a large language model. He said he and other colleagues tried to launch a conversational, interactive product using this model, but couldn't persuade Google.
"It wasn't the right timing for Google for various reasons," he said, laughing ruefully. "And I was just like, you know, this has to be out there in the world. This is going to clearly be the new wave of technology."
"The way I positioned LaMDA at Google is that conversation is the future interface. And Google is already a conversation. It's just an appallingly painful one," Suleyman added.
There's a lot for Google to lose if its search engine is forced to change radically. The company is the gatekeeper to the web, crawling, indexing and ranking millions of sites. It makes almost all its profit from running ads alongside search results. It is now experimenting with its own chatbot, Bard, and weaving some of this technology into Search. But no one really knows how it will make as much money from this new format.
With or without Google, the search experience will evolve to be conversational and interactive, Suleyman said on the No Priors podcast. This has huge ramifications for the future of the web and everyone who relies on it to access information and make a living. Here are more highlights from Suleyman's comments:
You say something to Google, it gives you an answer in 10 blue links. You say something about those 10 blue links by clicking on it. It generates that page. You look at that page. You say something to Google by how long you spend on that page, what you click on, how much you scroll up and down, etc, etc. And then you come back to the Search login and you update your query and you say something again to Google about what you saw. That's a dialog, and Google learns like that, and the problem is, it's using 1980s Yellow Pages to have that conversation. And actually now we can do that conversation in fluent natural language.
And I think the problem with what Google has, I guess in a way accidentally, done to the internet is that it has basically shaped content production in a way that optimizes for ads, and everything is now SEO-ed to within an inch of its life. You go on a web page and all the text has been broken out into sub-bullets, and subheaders, and separated by ads, and you spend 5 to 7 or 10 seconds just scrolling through the page to find the snippet that you actually want. Most of the time you are just looking for a quick snippet. And if you are reading, it's just in this awkward format and that's because if you spend 11 seconds on the page, instead of 5 seconds, that looks like high quality content to Google and it's quote-unquote engaging. So the content creator is incentivized to keep you on that page, and that's bad for us because what we as humans clearly want is high quality succinct fluent, natural language answers to the questions that we want. And then crucially we want to be able to update our response without thinking how do I change my query? We've learned to speak Google. It's a crazy environment. We've learned to Google. That's just a weird lexicon that we've co-developed with Google over 20 years. No. Now, that has to stop. That's over. That moment is done, and we can now talk to computers in fluent natural language and that is the new interface.
We think that in the next few years everyone is going to have their own personal AI. There are going to be many different types of AI. There will be business AIs, government AIs, nonprofit AIs, political AIs, influencer AIs, brand AIs. All of those AIs are going to have their own objective aligned to their owner. Which is to promote something, sell something, persuade you of something. And my belief is that we all as individuals want our own AIs that are aligned to our own interests and on our team and in our corner. And that's what a personal AI is. And ours is called Pi, personal intelligence. It is there to be your companion. We've started off with a style that is empathetic and supportive and we try to ask ourselves at the beginning what makes for good conversation.
I think it's going to change fundamentally. I think that most computing is going to become a conversation. And a lot of that conversation is going to be facilitated by AIs of various kinds. So your Pi is going to give you a summary of the news in the morning. It's going to help you keep learning about your favorite hobby, whether it's cacti or motorcycles. Every couple of days it's going to send you new updates, new information in a summary snippet that really suits exactly your reading style and your interests and your preference for consuming information. Whereas a website, the traditional open internet just assumes there's a fixed format and that everybody wants a single format. And generative AI clearly shows us that we can make this dynamic and emergent and entirely personalized. If I was Google I would be pretty worried because that old school system does not look like it's gonna be where we're at in 10 years time. It's not going to happen overnight. There's going to be a transition but these kinds of succinct, dynamic personalized interactive moments are clearly the future.
An AI is kind of just a website or an app. Let's say you have a blog about baking. You can still produce super high quality content with your AI and your AI will be a lot more engaging and interactive for other people to talk to. So to me, any brand is already kind of an AI. It's just using static tools. For a couple of hundred years, the ad industry has been using color, shape, texture, text, sound and image to generate meaning. It's just they release a new version every six months or every year. Now, that's going to become much more dynamic, and interactive. So I really don't subscribe to this view that there's going to be 1 or 5 AIs. I think this is completely misguided and fundamentally wrong. There are going to be 100s of millions of AIs or billions of AIs. And there will be a line to individuals. So what we don't want is autonomous AIs that operate completely independently and wander off doing their own thing. That doesn't end well. If your blogger has their own AI that represents their content, then I imagine a world where my Pi will go out and talk to that AI and say yeah my Mustafa is super interested to learn about baking, he can't crack an egg, so where does he need to start? And then Pi will have an interaction and be like oh that was really kind of funny and interesting. Mustafa will really like that. And then Pi will come back to me and be like hey I found this great AI today. Maybe we could set up a conversation, you'll find something super interesting. Or they recorded this little clip of me and the other AI interacting and here's a 3 minute video, or something like that. This will be how new content, I think, gets produced. And I think it will be your AI, your Pi, your personal AI that acts as interlocutor accessing the other world. Which is basically, by the way, what Google does at the moment. Google crawls other essentially AIs that are statically produced by the existing methods and has a little interaction with them, ranks them, and then presents them to you.
Google is throwing generative AI at everything – MIT Technology Review
Google is stuffing powerful new AI tools into tons of its existing products and launching a slew of new ones, including a coding assistant, it announced at its annual I/O conference today.
Billions of users will soon see Google's latest AI language model, PaLM 2, integrated into over 25 products like Maps, Docs, Gmail, Sheets, and the company's chatbot, Bard. For example, people will be able to simply type a request such as "Write a job description" into a text box that appears in Google Docs, and the AI language model will generate a text template that users can customize.
Because of safety and reputational risks, Google has been slower than its competitors to launch AI-powered products. But fierce competition from Microsoft, OpenAI, and others has left it no choice but to act, says Chirag Shah, a computer science professor at the University of Washington.
It's a high-risk strategy, given that AI language models have numerous flaws with no known fixes. Embedding them into its products could backfire and run afoul of increasingly hawkish regulators, experts warn.
Google is also opening up access to its ChatGPT competitor, Bard, expanding it from a select group in the US and the UK to the general public in over 180 countries. Bard will soon allow people to prompt it using images as well as words, Google said, and the chatbot will be able to reply to queries with pictures. Google is also launching AI tools that let people generate and debug code.
Google has been using AI technology for years in products like text translation and speech recognition. But this is the company's biggest push yet to integrate the latest wave of AI technology into a variety of products.
"[AI language models'] capabilities are getting better. We're finding more and more places where we can integrate them into our existing products, and we're also finding real opportunities to provide value to people in a bold but responsible way," Zoubin Ghahramani, vice president of Google DeepMind, told MIT Technology Review.
"This moment for Google is really a moment where we are seeing the power of putting AI in people's hands," he says.
The hope, Ghahramani says, is that people will get so used to these tools that they will become an unremarkable part of day-to-day life.
Google's announcement comes as rivals like Microsoft, OpenAI, and Meta and open-source groups like Stability.AI compete to launch impressive AI tools that can summarize text, fluently answer people's queries, and even produce images and videos from word prompts.
With this updated suite of AI-powered products and features, Google is targeting not only individuals but also startups, developers, and companies that might be willing to pay for access to models, coding assistance, and enterprise software, says Shah.
"It's very important for Google to be that one-stop shop," he says.
Google is making new features and models available that harness its AI language technology as a coding assistant, allowing people to generate and complete code and converse with a chatbot to get help with debugging and code-related questions.
The trouble is that the sorts of large language models Google is embedding in its products are prone to making things up. Google experienced this firsthand when it originally announced it was launching Bard as a trial in the US and the UK. Its own advertising for the bot contained a factual error, an embarrassment that wiped billions off the company's stock price.
Google faces a trade-off between releasing new, exciting AI products and doing scientific research that would make its technology reproducible and allow external researchers to audit it and test it for safety, says Sasha Luccioni, an AI researcher at AI startup Hugging Face.
In the past, Google has taken a more open approach and has open-sourced its language models, such as BERT in 2018. But because of the pressure from the market and from OpenAI, they're shifting all that, Luccioni says.
The risk with code generation is that users will not be skilled enough at programming to spot any errors introduced by AI, says Luccioni. That could lead to buggy code and broken software. There is also a risk of things going wrong when AI language models start giving advice on life in the real world, she adds.
Even Ghahramani warns that businesses should be careful about what they choose to use these tools for, and he urges them to check the results thoroughly rather than just blindly trusting them.
These models are very powerful. If they generate things that are flawed, then with software you have to be concerned about whether you just take the generated output and incorporate it into your mission-critical software, he says.
But there are risks associated with AI language models that even the most up-to-date and tech-savvy people have barely begun to understand. It is hard to detect when text and, increasingly, images are AI generated, which could allow these tools to be used for disinformation or scamming on a large scale.
The models are easy to jailbreak so that they violate their own policies against, for example, giving people instructions to do something illegal. They are also vulnerable to attacks from hackers when integrated into products that browse the web, and there is no known fix for that problem.
Ghahramani says Google does regular tests to improve the safety of its models and has built in controls to prevent people from generating toxic content. But he admits that it still hasn't solved that vulnerability, nor the problem of hallucination, in which chatbots confidently generate nonsense.
Going all in on generative AI could backfire on Google. Tech companies are currently facing heightened scrutiny from regulators over their AI products. The EU is finalizing its first AI regulation, the AI Act, while in the US, the White House recently summoned leaders from Google, Microsoft, and OpenAI to discuss the need to develop AI responsibly. US federal agencies, such as the Federal Trade Commission, have signaled that they are paying more attention to the harm AI can cause.
Shah says that if some of the AI-related fears do end up panning out, it could give regulators or lawmakers grounds for action with the teeth to actually hold Google accountable.
But in a fight to retain its grip on the enterprise software market, Google feels it can't risk losing out to its rivals, Shah believes. "This is the war they created," he says. "And at the moment, there's very little to nothing to stop them."
NRx Pushing Deep into the Mind Science Space – BioBuzz
By Mark Terry | May 16, 2023
Radnor, Penn.-based NRx Pharmaceuticals is pushing hard on the development of its lead compound, NRX-101, which is currently in a Phase IIb/III clinical trial for Suicidal Treatment-Resistant Bipolar Depression.
Stephen Willard, Chief Executive Officer and Director of NRx, said in a conference call discussing the company's recent quarterly report that the company was "off to a great start as we continue to build our brain health franchise."
Among the quarter's milestones, the company met with the FDA to discuss its plan to expand the intended use of NRX-101 from the original patient population, those with acute suicidality who might be treated in the hospital, to a broader population with what the company calls Treatment-Resistant Bipolar Depression, technically dubbed subacute suicidal ideation. These patients are treated in the outpatient setting and are the target population of the ongoing study.
The risk of suicide is very high in this population, and to the best of his knowledge, NRx is the first company to try to bring therapy to people whose only treatment option is electroshock therapy, Willard said.
The drug is an oral, fixed-dose combination of D-cycloserine and lurasidone. NRX-101 targets the brain's N-methyl-D-aspartate (NMDA) receptor. The Phase II STABIL-B trial of the drug in Severe Bipolar Depression with Acute Suicidal Ideation & Behavior (ASIB) showed a significant improvement over existing therapy in decreasing depression and suicidality when patients received the drug after a single dose of ketamine.
Based on the STABIL-B results, the FDA granted a Special Protocol Agreement and Breakthrough Therapy Designation for NRX-101 in patients with Severe Bipolar Depression with ASIB.
The company also received a Biomarker Letter of Support. These letters describe the FDA Center for Drug Evaluation and Research's (CDER) thoughts on the potential value of a biomarker while encouraging continued evaluation. Such a letter doesn't qualify the biomarker or endorse a specific biomarker test or device. The goal is to enhance the visibility of the biomarker, encourage data sharing, and potentially stimulate more studies.
Willard indicated the company believes there is potential for a commercial launch in 2024 and is on track for Phase IIb/III data later this year.
The company's meeting with the FDA in the quarter also resulted in guidance from the agency for the completion of NRx's manufacturing of Phase III/commercial-stage investigational product. This resulted in the Phase IIb/III trial being upgraded, with the potential for use as a registrational filing.
In the first quarter, the trial's independent Data Safety Monitoring Board (DSMB) reviewed safety and unblinded efficacy data in the first 50 patients. Willard said, "There was no futility signal at this time. And the DSMB recommended that the trial continue as planned."
In addition, NRx refined the way it validates the psychometric rating used to evaluate the efficacy endpoint in the trial. The process depends upon a team of veteran raters who train independent site raters and monitor the technical quality of each rating. NRx set a standard of 90% or better concordance between its veteran rating team and site raters. The standard was met for all study participants whose ratings were obtained in their primary language, and management believes that this standard can be maintained for the duration of the trial.
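NRx has not said how it computes concordance; a simple percent-agreement measure is one plausible way a "90% or better" standard could be operationalised. A hypothetical sketch:

```python
# Hypothetical sketch only: NRx has not published its concordance method.
# Percent agreement between a veteran rater and a site rater is one simple
# way a "90% or better" standard could be checked.

def percent_agreement(veteran_scores, site_scores, tolerance=0):
    """Fraction of items on which two raters agree to within `tolerance`."""
    if len(veteran_scores) != len(site_scores):
        raise ValueError("rating lists must be the same length")
    hits = sum(abs(v - s) <= tolerance
               for v, s in zip(veteran_scores, site_scores))
    return hits / len(veteran_scores)

# Example: the raters disagree on one item out of ten.
veteran = [3, 5, 2, 4, 4, 1, 0, 2, 5, 3]
site    = [3, 5, 2, 4, 3, 1, 0, 2, 5, 3]

print(f"concordance: {percent_agreement(veteran, site):.0%}")  # 90%
```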
In April, NRx contracted with 1nHealth to broaden recruitment, Willard said. The recruitment could cover up to 45 U.S. states as part of the enlarged study. NRx also broadened its previously disclosed relationship with RTP-based Science 37, a CRO that focuses on decentralized clinical trials. Under this agreement, participants identified via 1nHealth's recruitment initiative will be enrolled and randomized for inclusion in the broadened study.
Willard also reported the company is planning to study NRX-101 in post-traumatic stress disorder (PTSD). It hopes to begin planning for a Phase II trial in the second quarter for this indication, with plans to open enrollment this year.
Jonathan Javitt, company Founder and Chief Scientist, said in the conference call, "We do anticipate filing the IND this year, and from a registration perspective, in other words, sample size, we won't know until we have discussions with the FDA."
He added that the endpoints in PTSD are different from those for bipolar depression, but there are overlapping symptoms, including depression and suicidality.
For the first quarter, NRx reported $3.7 million in R&D expenses compared to $5.5 million in the same period in 2022. The drop of $1.8 million was primarily due to a reduction in clinical trial and development expenses associated with Zyesami, which was provided under the FDA's Expanded Access Protocol to treat critical COVID-19 patients with respiratory failure. In May 2022, the DSMB recommended halting the study of the drug in this patient population due to futility.
Seth Van Voorhees, Chief Financial Officer and Treasurer of NRx, said that at the end of the quarter the company had $16.5 million in cash and cash equivalents, which is expected to fund the company's operations through readouts in the fourth quarter.
Willard concluded, saying, "The past quarter has been incredibly productive, and we are uniquely positioned for success in 2023."
Mark Terry is a freelance writer, editor, novelist and ghostwriter. He holds a degree in microbiology & public health and spent 18 years in infectious disease research and clinical and research genetics prior to his transition to a writing career. His areas of expertise include biotechnology, pharma, clinical diagnostics, and medical practice management. He has written literally thousands of articles, as well as market research reports, white papers, more than 20 books, and many other written materials. He currently lives in Michigan with his family.
See the article here:
NRx Pushing Deep into the Mind Science Space BioBuzz - BioBuzz
Mars In Scorpio: What This Astrological Placement Really Means … – mindbodygreen
Nowadays, Scorpio is ruled by Pluto, but back in the Hellenistic astrology days, it was actually ruled by Mars. That said, Mars feels right at home in Scorpio, and it makes sense: Scorpio is the sign that rules sex, and Mars is definitely a passionate planet.
Then factor in Pluto, the planet of transformation and rebirth, and we start to get a clear picture of how having a Scorpio Mars results in intensity and unmatched depth.
As astrologer Desiree Roby Antila previously told mindbodygreen, folks with Mars in Scorpio are relentless, with an ability to get through, around, over, and under any obstacle. "They have a really strong drive, but again, they're fixed, so whenever Mars is in a fixed sign, it tends to make the person pretty stubborn," she notes.
This can look like having a hard time letting things go, holding grudges, being vengeful, etc., which is telltale Scorpio energy. They also have a tendency to push themselves to the point of burnout, which is something to watch out for.
And according to astrologer Molly Pennington, Ph.D., a Scorpio Mars placement will likely fly under the radar with an air of mystery. "People might just walk up to them not knowing that they're dealing with a Scorpion, and it's about to sting," she explains.
This is all part of the Scorpio Mars strategy of protecting and guarding itself against vulnerability. "They have this deep, emotional side, almost like wars of the heart that are deep down underneath it all," Pennington says, adding that those wars are often long-lasting, and they bring a "battlefield" mindset into all areas of their life.
Take Taylor Swift as a classic example of Scorpio Mars energy. She has a Scorpio Mars, and her insistence on re-recording all of her music to get the rights under her own name is a prime example of this placement's willingness to win at any cost.
Excerpt from:
Mars In Scorpio: What This Astrological Placement Really Means ... - mindbodygreen
Diana Gabaldon, mind behind the ‘Outlander’ series, to speak at … – KUNM
Diana Gabaldon, the mind behind the popular "Outlander" and Lord John Grey series of novels, will be a featured speaker at the Santa Fe International Literary Festival, which runs May 19-21.
Gabaldon spoke with KUNM about the connections between fiction and reality and her family's deep roots in Belen, New Mexico.
DIANA GABALDON: My dad was born in Bethlehem [Belén, NM] as the youngest of 13 children to a very important New Mexico dirt farmer who died three months after he was born. So, he grew up in what you might call dire poverty, as they didn't have enough to eat. He ended up being the only member of his family to go to college and graduate and then later became a state senator in Arizona.
KUNM: I want you to tell me a little bit about the wildly different life you had before writing fiction.
GABALDON: I was a biologist and a scientist in general. At the age of eight, I knew better than to tell my parents that I wanted to write novels, because my father, from his upbringing and so forth, was deeply conservative. He would have said, you know, "Don't do that! You could never make any money doing that! Do something else!" I wasn't going to put up with that, so I just didn't tell him. So, I went into science. I liked science. I was good at it. I enjoyed it. But I knew I was supposed to write novels.
At the age of 35, after thinking about it casually for several years, I decided I was going to start writing a novel. Just to learn how. This is not for publication. I'm not going to show it to anyone or tell anyone what I'm doing, let alone my husband, because he would have tried to stop me.
KUNM: Well, to be fair to you, your "Outlander" series is now a big hit across the US and even across the pond. Why do you think people are so captivated with tales of romance and fantasy like yours?
GABALDON: Oh, well, those are both very, very old story forms. And reasonably enough, they both deal with the same questions: What are we? How do we complete ourselves? What are we looking for in life? Most people are looking for a stable relationship, whatever form that takes; maybe they want to have families, whatever form those take.
KUNM: What connections do you make personally, when you look back at your writing, with what's currently going on in the world?
GABALDON: Human chaos is basically something that I deal with all the time. And, you look at what's going on on television. You know, aside from the introduction of technology, things have not changed that much. People still want to stone each other for believing the wrong thing. People still want to shriek and carry on and gibber. I mean, watching people on TV having protests in the street, and so forth. It's not that different from what you see in the jungle. Human behavior is instinctive, rather than reasoned. It's all too easy for people to abandon their reason, and just behave instinctively. And instinct is a very short fuse kind of thing.
KUNM: What do you think your take-home message is for someone when they pick up one of your novels?
GABALDON: I would hope they take from it a sense of the innate goodness of people. I get people who see something in the book and frequently, it speaks to them on a very deep and visceral level.
I wrote the first book, as I say, for practice, I wasn't planning to publish it. So when it did get published, I was wondering how people would respond to it. Because I didn't pull any punches at all. I said, "If you're gonna write this book, it's got to be honest," and so I was honest. And consequently, it's a very powerful book. It has some very dark substances in it here and there. I wondered what people were going to do about this... Were they going to burn the book? Ban me, etc?
I got an immense number of messages and letters from people who had had terrible sexual experiences, who had suffered rape, torture, or whatever. But what they all said was: "Thank you so much for writing this." They said, "It is immensely cathartic to see this approached in this way, so honestly, and I could see myself in the story, and it relieved me of my guilt... I realized it wasn't my fault."
So, it's very moving when people respond that way. But, it's not something that I could have foreseen happening. When you write a book, you just go into it as honestly as you can. And you tell the truth.
If you or someone you know has experienced sexual assault, call the National Sexual Assault Hotline at 1-800-656-4673.
See the rest here:
Diana Gabaldon, mind behind the 'Outlander' series, to speak at ... - KUNM
Deep neural networks used to perfect Alexa's Indian avatar – Times of India
When Alexa was launched in the US in 2014, there was nothing like it. The Star Trek-inspired voice assistant became a household name in no time. By 2019, the number of American parents who named their child Alexa had dropped by two-thirds; it was unofficially a name reserved for robots.
But bringing this technology to India, Amazon knew, would require a lot of backend re-engineering. There were linguistic challenges, and there were challenges of fitting Alexa's personality into the Indian context.
Snehal Meshram, senior manager for Alexa AI-Natural Understanding, says at the core of Alexa's software and hardware is the spoken language understanding (SLU) system. This system is what allows Alexa to understand and respond to human voices. The SLU is made of three primary components: ASR (automatic speech recognition), NLU (natural language understanding), and TTS (text-to-speech). The first two systems had to be built in India, says Snehal, especially since they needed people who could speak Indian languages.
ASR systems use a combination of algorithms, statistical models and machine learning techniques to analyse and transcribe spoken words and phrases. That process involves steps like audio capture, pre-processing to remove noise, and acoustic modelling. The India team had to do a lot of work to ensure that Alexa understood differences in dialect, accent and intonation.
Alexa devices in India also had to deal with a range of acoustic environments not typically seen in the US. There is, for instance, a lot more background noise in India. This meant the ASR systems had to learn to suppress background noise, detect sounds more accurately, and leverage beamforming technologies to get a sharper understanding of user requests. In the end, the team turned to neural ASR technology to improve the system. "Neural ASR technology leverages deep neural networks, allowing us to build large language models and acoustic models, and leverage that technology to then improve the systems," Snehal says. Using this, the team was able to reduce ASR errors by over 25% last year.
Once Alexa has converted the speech wave into text using ASR, the voice assistant needs to understand the user's intention. This is where NLU comes in. If a user says "an accident has left some serious marks on a car," this is very different from a sentence like "his marks in the exam were not that good." The word "marks" here represents two different contexts and two different intents. "NLU models interpret the text input, and we assign intent to every slot and every entity that we hear from the user. We've driven a lot of algorithmic improvements here as well," says Snehal. (A toy sketch of this intent-and-slot idea follows the article.)
Finally, Alexa needs to respond to the user, and she must do so in a manner different from her American counterpart; this is an entity that speaks English with an Indian accent. Snehal says a lot of regional phonetics information has gone into making sure that Indian languages and accents are perfected. Alexa responds in Hindi as well as in Indian English. "Here as well, we leverage a lot of our neural technology to make sure that we are continuously learning and continuously improving the accuracy," she says.
Making sure Alexa has an Indian personality was another monumental task. Seema Somshekar, managing editor for Alexa India, says that just like a person, Alexa has been programmed to have likes, opinions, preferences, a sense of humour, and even the ability to sing songs, even if she's not pitch perfect.
"Indian contextual experience is what our customers will be looking for," she says. To enable that, Seema's team had to help Alexa learn, understand and be aware of things that Indians love. "We needed to keep in mind the diversity of the country, the way in which people speak, the cultural nuances of the geography," she says.
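To make the "marks" example above concrete, here is a minimal, hypothetical sketch of the intent-and-slot idea Snehal describes: the same surface word resolves to different intents depending on the surrounding context. This is a toy stand-in written in Python, not Amazon's actual NLU stack; the intent names, context cues and helper function are assumptions for illustration, and a production system would use trained neural models instead of hand-written rules.

```python
# A toy sketch of NLU intent classification and slot filling.
# Assumption: everything here (intent names, cue words, NluResult)
# is hypothetical; real systems learn these mappings with neural models.

from dataclasses import dataclass, field


@dataclass
class NluResult:
    intent: str
    slots: dict[str, list[str]] = field(default_factory=dict)


# Context words that hint at which sense of the ambiguous word
# "marks" the user means.
CONTEXT_CUES = {
    "vehicle_damage": {"accident", "car", "scratch", "dent"},
    "exam_score": {"exam", "test", "grade", "subject"},
}


def interpret(utterance: str) -> NluResult:
    """Assign an intent and fill slots for an utterance containing 'marks'."""
    tokens = {t.strip(".,!?").lower() for t in utterance.split()}
    for intent, cues in CONTEXT_CUES.items():
        matched = tokens & cues
        if matched:
            # Slot filling: record which context words triggered the intent.
            return NluResult(intent=intent, slots={"evidence": sorted(matched)})
    return NluResult(intent="unknown")


if __name__ == "__main__":
    print(interpret("An accident has left some serious marks on a car"))
    # -> NluResult(intent='vehicle_damage', slots={'evidence': ['accident', 'car']})
    print(interpret("His marks in the exam were not that good"))
    # -> NluResult(intent='exam_score', slots={'evidence': ['exam']})
```

A real NLU layer goes much further, mapping entities like song titles or city names into typed slots, but the control flow, classify the intent and then fill slots the response logic can act on, has the same shape.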
Continue reading here:
Deep neural networks used to perfect Alexa's Indian avatar - Times of India
This Japanese baseball team's Deep-V uniforms are the future of sports attire – SB Nation
This is the sexy future we've been waiting for.
Nobody, and I mean nobody, is taking a bolder approach to sports uniforms than the Hokkaido Nippon-Ham Fighters of Japan's NPB league. This weekend the team debuted their new look, designed by legendary eccentric manager Tsuyoshi Shinjo, and you are not ready.
The Nippon-Ham Fighters, who had been clad in blue, white, and gold since their inception, flipped everything on its head by moving to red, gold, and black, looking like something out of a futuristic disco anime about dance-battling robots. Naturally, this came from the mind of the team's manager, who prefers to be called BIGBOSS.
Yes, this is real. Yes, he actually enjoys being called BIGBOSS in all-caps. Yes, we have written about him before, specifically when he rode onto the field using a hoverbike, because nobody is saying no to BIGBOSS.
The new deep-V uniforms are something we'd never see in the U.S. Our uniforms are far too corporate and sanitized. Everything needs to be approved by 50 different apparel executives, most leagues have weird uniform rules about what is or isn't acceptable, and then the only tweak is slapping on the logo of some crypto company that will be bankrupt in 2-3 years and run away with everyone's money in a rug pull.
It's nice to know that somewhere in the world uniforms are still beautiful, pure, and reveal an alluring amount of chest. Thank you to the Nippon-Ham Fighters and BIGBOSS for making this a reality.
Read more:
This Japanese baseball team's Deep-V uniforms are the future of sports attire - SB Nation