
The Biden EO on AI: A stepping stone to the cybersecurity benefits of … – SC Media

While the Biden administration's executive order (EO) on artificial intelligence (AI) governs policy areas within the direct control of the U.S. government's executive branch, it is broadly important because it informs industry best practices and subsequent laws and regulations in the U.S. and abroad.

Accelerating developments in AI, particularly generative AI, over the past year or so have captured policymakers' attention. And calls from high-profile industry figures to establish safeguards for artificial general intelligence (AGI) have further heightened attention in Washington. In that context, we should view the EO as an early and significant step in addressing AI policy rather than a final word.

Given our extensive experience with AI since the company's founding in 2011, we want to highlight a few important issues that relate to innovation, public policy and cybersecurity.

Like the technology it seeks to influence, the EO itself has many parameters. Its 13 sections cover a broad cross-section of administrative and policy imperatives. These range from policing and biosecurity to consumer protection and the AI workforce. Appropriately, there's significant attention to the nexus between AI and cybersecurity, and that's covered at some length in Section 4.

Before diving into specific cybersecurity provisions, it's important to highlight a few observations on the document's overall scope and approach. Fundamentally, the document strikes a reasonable balance between exercising caution regarding potential risks and enabling innovation, experimentation and adoption of potentially transformational technologies. In complex policy areas, some stakeholders will always disagree about how to achieve that balance, but we're encouraged by several attributes of the document.

First, in numerous areas of the EO, agencies are designated as owners of specific next steps. This clarifies for stakeholders how to offer feedback and reduces the odds of gaps or duplicative efforts.

Second, the EO outlines several opportunities for stakeholder consultation and feedback. These will likely materialize through request for comment (RFC) opportunities issued by individual agencies. Further, there are several areas where the EO tasks existing advisory panels, or establishes new ones, to integrate structured stakeholder feedback on AI policy issues.

Third, the EO mandates a brisk progression for next steps. Many EOs require agencies to finish tasks in 30- or 60-day windows, which are difficult for them to meet at all, let alone in deliberate fashion. This document in many instances spells out 240-day deadlines, which should allow for 30- and 60-day engagement periods through the RFCs.

Finally, the EO states plainly: as generative AI products become widely available and common in online platforms, "agencies are discouraged from imposing broad general bans or blocks on agency use of generative AI." This should help ensure that government agencies explore positive use cases for leveraging AI for their own mission areas. If we can use history as a guide, it's easy to imagine a scenario where a talented, junior staffer at a given agency identifies a good way to leverage AI at some time next year that no one could easily forecast this year. It's unwise to foreclose that possibility, as we should encourage innovation inside and outside of government.

On cybersecurity, the EO touches on a number of important areas. It's good to see specific callouts to agencies like the National Institute of Standards and Technology (NIST), Cybersecurity and Infrastructure Security Agency (CISA) and Office of the National Cyber Director (ONCD) that have significant applied cyber expertise.

One section of the EO attempts to reduce risks of synthetic content: generative audio, imagery and text. It's clear that the measures cited here are exploratory in nature rather than rigidly prescriptive. As a community, we'll need to innovate solutions to this problem. And with elections around the corner, we hope to see rapid advancements in this area.

It's clear the EO's authors paid close attention to enumerating AI policy through established mechanisms, some of which are closely related to ongoing cybersecurity efforts. This includes the direction to align with the AI Risk Management Framework (NIST AI 100-1), the Secure Software Development Framework, and the Blueprint for an AI Bill of Rights. This will reduce the risks associated with establishing new processes, while allowing for more coherent frameworks in areas where there are only subtle distinctions or boundaries between, for example, software, security and AI.

The document also attempts to leverage sector risk management agencies (SRMAs) to drive better preparedness within critical infrastructure sectors. It mandates the following:

"Within 90 days of the date of this order, and at least annually thereafter, relevant SRMAs, in coordination with the Director of the Cybersecurity and Infrastructure Security Agency within the Department of Homeland Security for consideration of cross-sector risks, shall evaluate and provide to the Secretary of Homeland Security an assessment of potential risks related to the use of AI in critical infrastructure sectors involved, including ways in which deploying AI may make critical infrastructure systems more vulnerable to critical failures, physical attacks, and cyberattacks, and shall consider ways to mitigate these vulnerabilities."

While this is important language, we also encourage these working groups to consider benefits along with risks. There are many areas where AI can drive better protection of critical assets. When done correctly, AI can rapidly surface hidden threats, accelerate the decision-making of less experienced security analysts and simplify a multitude of complex tasks.

This EO represents an important step in the evolution of U.S. AI policy. It's also very timely. As we described in our recent testimony to the House Judiciary Committee, AI will drive better cybersecurity outcomes, and it's also of increasing interest to cyber threat actors. As a community, we'll need to continue to work together to ensure defenders realize the leverage AI can deliver, while mitigating whatever harms might come from the abuse of AI systems by threat actors.

Drew Bagley, vice president of cyber policy, CrowdStrike; Robert Sheldon, senior director, public policy and strategy, CrowdStrike

Read more here:

The Biden EO on AI: A stepping stone to the cybersecurity benefits of ... - SC Media


Cloud security continues to give IT managers headaches. Here’s why – SiliconANGLE News

Cloud security continues to vex corporate information technology managers, and new research indicates that the problems are both widespread and not easily fixable, thanks to a number of weak areas.

In many cases, the procedures to secure cloud workloads have been well-known for years but aren't always applied consistently or reliably. Some old chestnuts, such as cross-site scripting and SQL injection attacks on web servers, still account for almost half of today's cloud vulnerabilities, for example.

The problems cover the waterfront and aren't just structural issues. Secondary issues crop up as well: security alerts take too much time to resolve, and risky behaviors fester without any real accountability to prevent or change them.

SiliconANGLE examined four cloud security reports that address these issues:

The reports show that despite reams of details on best security practices, organizations don't do well with their implementation, follow-through or consistent application. For example, consider well-known practices such as the usage of complex and unique passwords, collection of access logs and avoidance of hard-coded credentials.

Unit 42 states what should be obvious by now: hard-coded credentials pose significant security risks because adversaries can use them to bypass most of the defense mechanisms. Yet it found that more than 80% of organizations still use them.
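
Where the context allows, a quick repository scan can surface the most obvious offenders before they ship. The sketch below is a minimal, illustrative Python example; the regex patterns, file extensions and directory layout are assumptions, not a substitute for a dedicated secret-scanning tool.

```python
import re
from pathlib import Path

# Illustrative patterns only; real secret scanners ship far larger rulesets.
PATTERNS = {
    "aws_access_key_id": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_secret": re.compile(r'(?i)(password|secret|api_key)\s*[:=]\s*["\'][^"\']{8,}["\']'),
}

SUFFIXES = {".py", ".js", ".tf", ".yaml", ".yml", ".ini", ".cfg"}

def scan(root="."):
    """Walk a source tree and print anything that looks like a hard-coded credential."""
    for path in Path(root).rglob("*"):
        if path.suffix not in SUFFIXES and path.name != ".env":
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # unreadable entry or a directory; skip it
        for name, pattern in PATTERNS.items():
            for match in pattern.finditer(text):
                print(f"{path}: possible {name}: {match.group(0)[:20]}...")

if __name__ == "__main__":
    scan()
```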

A similar majority of accounts analyzed in its report don't turn on the logging and auditing features across Amazon Web Services' CloudTrail, Microsoft Azure's key vault audit logging and Google Cloud Platform's Storage Bucket logging services.

The situation is slightly better when it comes to enforcing another best-practice safeguard: multifactor authentication. Even for cloud-oriented businesses, MFA has been slowly adopted within organizations. Datadog's research found that 45% of AWS organizations had one or more users authenticate to their main command consoles without using MFA.

Worse, only 20% of Azure organizations had all of their Azure Active Directory users authenticate with MFA. Unit 42's research concurs, with these findings: At least three-quarters of organizations don't enforce MFA for console users, and more than half of organizations don't enforce MFA for root/admin users. All of these numbers are pretty dismal, given the widespread dictums for MFA that have appeared along with the numerous breach statistics of accounts that relied on less secure methods.

Speaking of security credentials, Datadog's report found that static, long-lived credentials still cast a long shadow, and eliminating them has proven difficult. It found that across the three major cloud providers, roughly half of access keys are more than a year old, and more than one in 10 are more than three years old. "This demonstrates that access keys tend to live for longer than they should, and many access keys aren't being used and still haven't been deprovisioned," the authors wrote.
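
For teams that want to act on findings like these, the cloud providers' own APIs expose the key metadata that makes an age audit straightforward. The following is a minimal sketch against AWS IAM, assuming boto3 is installed and read-only IAM permissions are available; the 365-day threshold is purely illustrative.

```python
from datetime import datetime, timezone

import boto3

MAX_AGE_DAYS = 365  # illustrative threshold; many teams rotate far sooner

def stale_access_keys():
    """List IAM users' access keys and flag those older than the threshold."""
    iam = boto3.client("iam")
    now = datetime.now(timezone.utc)
    for page in iam.get_paginator("list_users").paginate():
        for user in page["Users"]:
            keys = iam.list_access_keys(UserName=user["UserName"])["AccessKeyMetadata"]
            for key in keys:
                age = (now - key["CreateDate"]).days
                if age > MAX_AGE_DAYS:
                    print(f"{user['UserName']}: key {key['AccessKeyId']} "
                          f"is {age} days old (status: {key['Status']})")

if __name__ == "__main__":
    stale_access_keys()
```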

IBM's X-Force team agreed with these statistics: It discovered plain-text credentials located on user endpoints in 33% of engagements involving cloud environments.

Datadog's report identified two other major issues:

When these technical challenges are combined with bad behaviors, cloud security becomes more difficult to enforce. As Illumio's report said, "The vast majority of organizations that use cloud-based services need more efficiency, visibility and capabilities to reduce risks in their environment," and the survey found that nearly half the data breaches suffered over the past year originated in the cloud.

Part of the problem, according to Unit 42's research, is the difference between cloud and on-premises security: "Traditional digital forensics and incident response techniques are not designed to handle these types of events because the tooling, processes, and data sources necessary for investigating security incidents are very different between on-premises and cloud environments."

Illumio's report contains some dire language: "Today's cloud security solutions are continuing to fail when it comes to safeguarding companies against cybercriminals who regularly cause massive disruption by exfiltrating data and demanding exorbitant ransoms."

Two solid recommendations come from the IBM report: Engage in adversary simulation exercises using cloud-based scenarios to train and practice effective cloud-based incident response. And use AI capabilities to help scrutinize digital identities and behaviors, verify their legitimacy and deliver smarter authentication.


Read more from the original source:
Cloud security continues to give IT managers headaches. Here's why - SiliconANGLE News


CJI DY Chandrachud speaks on AI, poses question on ethical treatment of these technologies – HT Tech

While the world is busy developing artificial intelligence (AI) to take it to the artificial general intelligence (AGI) stage, not many are giving thought to the interpersonal relationship being created between humans and this emerging technology. AGI is human-level intelligence, and thus it could also empower AI to develop some level of consciousness. Speaking at a conference on Saturday, Chief Justice of India D.Y. Chandrachud highlighted a fundamental question about AI: its ethical treatment.

Addressing the plenary session of the 36th 'LAWASIA' conference virtually, the CJI spoke on "Identity, the Individual and the State - New Paths to Liberty". LAWASIA is a regional association of lawyers, judges, jurists and legal organisations, which advocates for the interests and concerns of the Asia Pacific legal profession.

Citing English philosopher John Stuart Mill's book On Liberty, published in 1859, the CJI said the author discussed the historical struggle between liberty and authority, describing the tyranny of the government, which in his view needs to be controlled by the liberty of citizens, PTI reported. Mill divided this control of authority into two mechanisms: firstly, necessary rights belonging to the citizens, and secondly, constitutional checks by which the community consents to the acts of the governing power, according to him.

The idea of liberty, the Chief Justice said, can be summarised in the following phrase: "Your right to swing your fist ends where my nose begins." He also spoke about how in the digital age "we are faced with several fascinating aspects of Artificial Intelligence. There is a complex interplay between Artificial Intelligence (AI) and personhood where we find ourselves navigating uncharted territories that demand both philosophical reflection and practical considerations."

In contemplating the intersection of AI and personhood, "We are confronted with fundamental questions about the ethical treatment of these technologies...." He cited the example of a humanoid robot (Sophia) which was granted citizenship (in Saudi Arabia) and said, "We must reflect on whether all humans who live, breathe and walk are entitled to personhood and citizenship based on their identity."

Noting that liberty is the ability to make choices for oneself and change the course of life, the Chief Justice said identity intersects with the person's agency and life choices. "As lawyers, we are constantly confronted with this intersection and the role of the State to limit or expand the life opportunities of the people. While the relationship between the state and liberty has been understood widely, the task of establishing and explaining the relationship between identity and liberty is incomplete," he said.

Traditionally, liberty has been understood as the absence of State interference in a person's right to make choices. However, contemporary scholars have come to the conclusion that the role of the State in perpetuating social prejudices and hierarchies cannot be ignored, Chief Justice Chandrachud said.

"In effect, whether the state does not intervene, it automatically allows communities with social and economic capital to exercise dominance over communities who have been historically marginalised." He also said people who face marginalisation because of their caste, race, religion, gender, or sexual orientation will always face oppression in a traditional, liberal paradigm. This empowers the socially dominant.

The Chief Justice also stressed: "We must broaden our perspectives. The notion of popular sovereignty, for example, inherently demands the inclusion of pluralism and diversity at its core." In India, he said, affirmative action has been prescribed and even mandated by the Constitution of India in the context of Scheduled Castes, Scheduled Tribes, and Backward Classes.

(With inputs from PTI)

Read the original post:

CJI DY Chandrachud speaks on AI, poses question on ethical treatment of these technologies - HT Tech


6 green coding best practices and how to get started – TechTarget

From server closets within businesses to massive server farms underpinning cloud service providers, data centers run applications all over the world. All that processing, writing to memory and other activities use a lot of power, as well as generate a lot of waste heat. While data center operators can tackle green initiatives at the physical hardware and facilities level, developers and testers can contribute with green coding.

Learn green coding techniques to program software with sustainability in mind, from reduced artifacts to slimmed-down CI/CD pipelines to solar-powered coding. Use the various benefits of a sustainable approach to development to get teams involved and start making change.

Anyone who has used a package manager, such as npm or Homebrew, knows a major package installation often requires multiple minor packages as well. And those minor packages have dependencies of their own. As best you can, limit the number of artifacts involved.
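
One way to make that visible is to walk a package's dependency tree and count what a single install actually drags in. The sketch below is a minimal Python example using only the standard library's importlib.metadata; the crude requirement-string parsing and the "requests" example package are assumptions for illustration.

```python
import re
from importlib.metadata import PackageNotFoundError, requires

def transitive_deps(package, seen=None):
    """Collect the names of everything an installed package pulls in, recursively."""
    seen = set() if seen is None else seen
    try:
        reqs = requires(package) or []
    except PackageNotFoundError:
        return seen  # not installed locally, so nothing further to inspect
    for raw in reqs:
        # Crude parse: take the project name before any version or marker syntax.
        name = re.split(r"[ <>=!;\[(~]", raw, maxsplit=1)[0]
        if name and name not in seen:
            seen.add(name)
            transitive_deps(name, seen)
    return seen

if __name__ == "__main__":
    deps = transitive_deps("requests")  # any installed package name works here
    print(f"requests pulls in {len(deps)} additional packages: {sorted(deps)}")
```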

Operating systems vary widely in requirements. For example, Linux Mint, a popular Linux distribution based on Ubuntu and Debian, recommends 100 GB of disk space for comfortable operation. Alpine Linux, by contrast, is a 5 MB image built around musl libc and BusyBox for a Docker and Kubernetes containerized environment.

IT organizations can reduce disk, memory and processing demands by considering the software for a given purpose. Curate a list of dependencies to do just what is needed. Compile binaries with just the needed dependencies. Consider cloud-based and virtualized deployments, as many virtual servers can run on shared physical hardware.

How much processing power should software use? In one school of thought, programmers build software to use all the available processing power. This approach assumes that advances in computing hardware will enable the software to run faster when it comes to market or enters production. Joel Spolsky, founder of Trello, Stack Overflow and others, promoted the idea that bloatware is good as far back as 2001.

Computer science schools, on the contrary, teach topics like memory consumption and Big O notation. Big O notation is a method for describing how an algorithm's processing demand grows as its input grows. Big O optimization aims for processing demand to grow more slowly as the number of elements in a list, or leaves in a tree structure, increases. It might be time to give Big O notation another look.
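
A small example makes the point concrete. The sketch below answers the same question, whether a list contains a duplicate, with an O(n^2) algorithm and an O(n) algorithm; on larger inputs, the difference in processing demand is exactly what Big O analysis predicts.

```python
def has_duplicates_quadratic(items):
    # O(n^2): compares every pair of elements.
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

def has_duplicates_linear(items):
    # O(n): a single pass using a set for constant-time membership checks.
    seen = set()
    for item in items:
        if item in seen:
            return True
        seen.add(item)
    return False

if __name__ == "__main__":
    data = list(range(5_000)) + [4_999]   # one duplicate, placed at the very end
    print(has_duplicates_linear(data))    # near-instant
    print(has_duplicates_quadratic(data)) # millions of comparisons to reach the same answer
```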

Other lean coding methods include serving lower-resolution photos on websites and moving lookups from databases to in-memory caches. Open source NoSQL databases such as MongoDB, Couchbase and Redis store and retrieve common information with less processing power than a relational database.
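
As a minimal illustration of moving a repeated lookup into memory, the sketch below caches a simulated database query in-process with Python's functools.lru_cache; in a real deployment a shared cache such as Redis would play the same role across processes. The expensive_lookup function and its latency are stand-ins, not a real data layer.

```python
import time
from functools import lru_cache

@lru_cache(maxsize=1024)
def expensive_lookup(user_id: int) -> dict:
    # Placeholder for a relational-database round trip; the sleep simulates query latency.
    time.sleep(0.05)
    return {"user_id": user_id, "plan": "basic"}

if __name__ == "__main__":
    start = time.perf_counter()
    for _ in range(100):
        expensive_lookup(7)  # only the first call pays the lookup cost
    print(f"100 lookups took {time.perf_counter() - start:.3f}s")
```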

While opting for SaaS dev and test tools may be generally more efficient than installing them to run on servers, cloud apps can still suffer from bloat. Modern DevSecOps tools often create full test environments and run all automated checks on every commit. They can also run full security scans, code linters and complexity analyzers, and stand up entire databases in the cloud. When the team merges the code, they do it all over again. Some systems run on a delay and automatically restart, perhaps on dozens of monitored branches.

Observability tools to monitor everything can lead to processing bloat and network saturation. For example, imagine a scenario where the team activates an observability system for all testing. Each time network traffic occurs, the observability system messages a server about what information goes where -- essentially doubling the test traffic. The energy consumed is essentially wasted. At best, the test servers run slowly for little benefit. At worst, the production servers are also affected, and outages occur.
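
One common mitigation is to sample telemetry rather than report every event. The sketch below is purely hypothetical: the emit() function stands in for a vendor's reporting call, and the 1% sample rate is an arbitrary illustration rather than a recommendation.

```python
import random

SAMPLE_RATE = 0.01  # report roughly 1 in 100 events

def emit(event: dict) -> None:
    # Stand-in for a network call to an observability backend.
    print(f"reporting: {event}")

def record(event: dict) -> None:
    # Only a sampled fraction of events ever leaves the host, so instrumentation
    # does not double the traffic it is meant to observe.
    if random.random() < SAMPLE_RATE:
        emit(event)

if __name__ == "__main__":
    for i in range(1000):
        record({"request_id": i, "status": 200})
```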

An audit of exactly how much activity takes place in the pipeline could yield both processing savings and lower consumption. For example, the team could have the build process produce a cloud-native image, yet do initial testing locally on a laptop.

Monolithic systems can require an expensive regression testing process on every build. Even if entirely automated, the build and automated check process will be largely redundant, redoing what has been done before, for every build. Microservices offer the opportunity to change, test, deploy and monitor just one section of the code at a time.
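
A simple way to exploit that opportunity is to run only the test suites for services whose files actually changed. The sketch below is a hypothetical Python example; the service directory names, the origin/main branch and the pytest invocation are all assumptions about the repository layout.

```python
import subprocess
import sys

SERVICES = {"billing", "catalog", "checkout"}  # hypothetical service directories

def changed_services():
    """Map files changed since the main branch to top-level service directories."""
    diff = subprocess.run(
        ["git", "diff", "--name-only", "origin/main...HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()
    return {path.split("/", 1)[0] for path in diff} & SERVICES

if __name__ == "__main__":
    targets = changed_services()
    if not targets:
        print("No service code changed; skipping test run.")
        sys.exit(0)
    for service in sorted(targets):
        subprocess.run(["pytest", f"{service}/tests"], check=True)
```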

Other modern approaches include languages such as PHP where the code can be deployed one web page at a time. There are also tools that take interpreted languages like PHP and compile them, resulting in smaller files that run with less CPU and memory use.

Commonly, teams that subscribe to the continuous delivery approach actually have a three- to four-day cycle time between code commit and deployment. That's likely due to human delay and processing that may be redundant. An analysis of the CI/CD pipeline could yield both time and processing savings.

The ideas above focus on coding, development and test practices, but there are also similar changes operations teams can make.

While green coding largely focuses on software design and operation, developers and testers can make an individual difference for the environment, as can their employers. Short or nonexistent commutes reduce emissions from vehicles, as does choosing a green vehicle. Solar, wind and other renewable energy sources can power homes and businesses. Companies can provide incentives for their employees to work in this way. Many governments offer tax benefits and other rewards for sustainability efforts at the corporate and individual levels.

One great advantage of green coding practices is that you can start both globally and locally. Any individual worker can brainstorm ways to reduce the footprint of the systems and code. The team can commit to reviewing the CI/CD pipeline or to looking into architecture best practices.

All it takes is someone to get started. Changes to a CI/CD pipeline, analyzing code practices, adding containers and refactoring microservices are all in the hands of a software development team.

Additionally, send larger, architectural green ideas up to leadership. To get buy-in for these initiatives, highlight the benefits to the business, which include the following:

Matt Heusser is managing director at Excelon Development, where he recruits, trains and conducts software testing and development. The initial lead organizer of the Great Lakes Software Excellence Conference and lead editor of "How to Reduce the Cost of Software Testing," Matt served a term on the board of directors for the Association for Software Testing.

Link:
6 green coding best practices and how to get started - TechTarget


Nebulon continuing to focus on OEM channel-building | Microscope – ComputerWeekly.com

Nebulon has identified a growing opportunity to deliver public cloud services to a commercial audience through the channel.

The smart infrastructure player operates a model of selling products via OEM partners and their channels, including HPE, Lenovo, Dell and Super Micro.

Nebulon is pitching a controller, offering storage, network services and cyber security services, that can be plugged into servers and provide the functionality public cloud customers have been enjoying for the past few years.

Craig Nunes, chief operating officer and co-founder of Nebulon, said there was a gap in the market that could only be plugged through OEM hardware partners and the channel. "What we want to do in partnership with our server providers, with our channel, is bring that hyperscale datacentre model to enterprises," he said.

Nunes added that the vendor was able to work with the security channel and those who were working with customers at the edge.

"For those partners who are bringing a security value to their customers at the network and application layer, we offer a way to drive that down deeper into the infrastructure, inside the server, to protect against ransomware attacks on the server operating system and recover within minutes," he said.

"We have observed [the edge] is probably the fastest-growing part of the enterprise," added Nunes. "It's an area where I think customers need a lot of help from their partners, either from a design or ongoing management perspective. Whether it's just teaming with a partner around design and deployment of an edge opportunity or the ongoing management, you know, within that customer's private cloud of that edge deployment, it is a great opportunity."

Another area of interest is artificial intelligence (AI), and Nunes has noted there are a lot of partners trying to work out how they can bring additional value to AI infrastructure.

"We have different solution blueprints that they can take to their customers," he added. "It's for folks trying to find their place in serving AI infrastructure and managing AI infrastructure for their customers, a way to take advantage of that."

As well as arming the channel with attractive technology, the other focus has been on continuing to ensure those selling via OEMs are recognised and rewarded.

Back in September 2021, the firm launched its smartPartner programme, with the aim of supporting resellers and those that want to add further value to an OEM solution. Nunes said those efforts had been continuing over the past couple of years to foster further channel growth.

"We focus on the partners who also are transforming themselves and those that see the value," he said, adding that they were already able to see a fairly substantial opportunity.

"The most successful partners we have are in early at the design of the project, and they are also being asked to own it over time," said Nunes. "They're sticking around for the management and maintenance of that, and for those partners, they are making a tremendous business. They're an extension of those customer accounts."

Read more from the original source:
Nebulon continuing to focus on OEM channel-building | Microscope - ComputerWeekly.com


Will AI Replace Humanity? – KDnuggets

We are living in a world of probabilities. When I started talking about AI and its implications years ago, the most common question was, "Is AI coming after us?"

And while the question remains the same, my response has changed in terms of probabilities: AI is now more likely to replace human judgment in certain areas, so the probability has increased over time.

As we discuss a complex technology, the answer will not be straightforward. It depends on several factors, such as what it means to be intelligent, whether we mean replacing jobs, the anticipated timelines for Artificial General Intelligence (AGI), and the capabilities and limitations of AI.

Let us start with understanding the definition of Intelligence:

Stanford defines intelligence as the ability to learn and perform suitable techniques to solve problems and achieve goals appropriate to the context in an uncertain, ever-varying world.

Gartner describes it as the ability to analyze, interpret events, support and automate decisions, and take action.

AI is good at learning patterns; however, mere pattern recognition does not qualify as intelligence. It is only one aspect of the broader spectrum of multi-dimensional human intelligence.

As experts believe, AI will never get there because machines cannot have a sense (rather than mere knowledge) of the past, the present, and the future; of history, injury or nostalgia. Without that, there's no emotion, depriving bi-logic of one of its components. Thus, machines remain trapped in singular formal logic. So there goes the intelligence part.

Some might refer to AI clearing tests from prestigious institutes and, most recently, the Turing test as a testament to its intelligence.

For the unversed, the Turing test is an experiment designed by Alan Turing, a renowned computer scientist. According to the test, machines possess human-like intelligence if an evaluator cannot distinguish the response between a machine and a human.

A comprehensive overview of the test highlights that though Generative AI models can generate natural language based on the statistical patterns or associations learned from vast training data, they do not have human-like consciousness.

Even advanced tests, such as the General Language Understanding Evaluation, or GLUE, and the Stanford Question Answering Dataset, or SQuAD, share the same underlying premise as that of Turing.

Let us start with the fear that is fast becoming a reality: will AI make our jobs redundant? There is no clear yes or no answer, but the moment is fast approaching as GenAI casts a wider net over automation opportunities.

McKinsey reports, "By 2030, activities that account for up to 30 percent of hours currently worked across the US economy could be automated, a trend accelerated by generative AI."

Profiles like office support, accounting, banking, sales, or customer support are first in line for automation. Generative AI augmenting software developers in code-writing and testing workflows has already affected the job roles of junior developers.

Its results are often considered a good starting point for an expert to enhance the output further, such as in making marketing copy, promotional content, etc.

Some narratives make this transformation sound subtle by highlighting the possibility of new job creation, such as in healthcare, science, and technology in the near to short term, and in AI ethics, AI governance, audits, AI safety, and more to make AI a reality overall. However, these new jobs cannot outnumber those being replaced, so we must consider the net new jobs created to see the final impact.

Next comes the possibility of AGI, which, like the multiple definitions of intelligence, warrants a clear definition. Generally, AGI refers to the stage when machines gain sentience and awareness of the world, similar to a human's.

However, AGI is a topic that deserves a post on its own and is not under the scope of this article.

For now, we can take a leaf from the diary of DeepMind's CEO to understand its early signs.

Looking at a broader picture, it is intelligent enough to help humans identify patterns at scale and generate efficiencies.

Let us substantiate this with the help of an example where a supply chain planner looks at several order details and works on the ones at risk of a shortfall. Each planner has a different approach to managing shortfall deliveries:

As an individual planner may have a limited view of and approach to managing such situations, machines can learn the optimal approach by observing the actions of many planners and help them automate simple scenarios through their ability to discover patterns.

This is where machines have a vantage point over humans' limited ability to simultaneously manage several attributes or factors.

However, machines are what they are, i.e., mechanical. You cannot expect them to cooperate, collaborate, and develop compassionate relationships with their teams as empathetically as great leaders do.

I frequently engage in lighter team discussions not because I have to but because I prefer working in an environment where I am connected with my team, and they know me well, too. It is too mechanical to talk only about work from the get-go or to merely act as if it matters.

Take another instance where a machine analyzes a patient's records and discloses a health scare as-is following its medical diagnosis. Compare this with how a doctor would handle the situation thoughtfully, simply because they have emotions and know what it feels like to be in a crisis.

Most successful healthcare professionals go beyond their call of duty and develop a connection with the patient to help them through difficult times, which machines are not good at.

Machines are trained on data that captures the underlying phenomenon, and they create models that best estimate it.

Somewhere in this estimation, the nuances of specific conditions get lost. Machines do not have a moral compass similar to the one a judge has when looking at each case.

To summarize, machines may learn patterns from data (and the bias that comes with it) but do not have the intelligence, drive, or motivation to make fundamental changes to handle the issues plaguing humanity. They are objective-focused and built on top of human intelligence, which is complex.

This phrase sums up my thoughts well: AI can replace human brains, not beings.

Vidhi Chugh is an AI strategist and a digital transformation leader working at the intersection of product, sciences, and engineering to build scalable machine learning systems. She is an award-winning innovation leader, an author, and an international speaker. She is on a mission to democratize machine learning and break the jargon for everyone to be a part of this transformation.

The rest is here:

Will AI Replace Humanity? - KDnuggets


India’s approach to regulating AI is good, says Andrew Ng | Mint – Mint

Andrew Ng, the founding lead of the Google Brain team and former chief scientist at Baidu, juggles multiple roles as a teacher, entrepreneur, and investor. He is currently the founder of DeepLearning.AI--an edtech company, founder & CEO of Landing AI--a software provider for industrial automation and manufacturing, general partner at AI Fund, and chairman and co-founder of Coursera, besides being an adjunct professor at Stanford University's Computer Science Department.

In an interview, he shares his views on the OpenAI fracas, loss of jobs to generative artificial intelligence (AI), the heated debate around artificial general intelligence (AGI), and global regulation of AI, among other things. Edited excerpts:

Sam (Altman, CEO of OpenAI) was my student at Stanford. He interned at my lab. I think he's been a great leader. What happened was pretty tragic and it could have been avoided (the interview was conducted a day prior to Altman returning as CEO of OpenAI). OpenAI has many valuable assets, and reportedly more than $1 billion in annualised revenue, many customers, and a phenomenal product. But its governance structure is now very much discredited. Earlier, there were all sorts of arguments about why a nonprofit structure is preferable, but this incident will make investors shy away from the clever arguments for very innovative governance structures.

For a lot of jobs, Gen AI can augment or automate just a small fraction of the work--let's say 20% of someone's job could be automated using GenAI. That means that it's beneficial both to businesses and to individuals, but we need to figure out which 20% can be automated, and then use GenAI to get that productivity boost. I'm not minimising the suffering of the much smaller number of people whose jobs will be fully automated. I think we owe it to them (those impacted) to create a safety net for them. But in the vast majority of cases, AI today is good enough only to automate part of someone's job. And that often means that people in that job who use AI will replace people who don't.

Fewer Asian countries have been caught up in the AI extinction hype. It's more of a European thing. The most widely accepted definition of AGI is that AI could do any intellectual task that a human can do. I think we're decades away from that--maybe 30-50 years away. It turns out that there are a number of companies and people who are optimistic about achieving AGI in 3-5 years.

But if you look carefully, many of them have been changing the definition of AGI, and thus are quickly lowering the bar. If we ask: Is the machine sentient, or self-aware, it will be a philosophical question. And I don't know the answer to it because it's not a scientific question. But imagine if we were to set a very low bar--some very simple test to declare machines sentient, it would lead to very sensational news articles saying machines are sentient. So, I'm not sure whether coming up with a very narrow technical definition is a good thing.

We need good regulations on AI, and clarity on how we should or should not take AI into areas such as healthcare. The EU (European Union) AI Act was thoughtful in some places and flawed in others. It's a good idea to take a tiered approach to AI risk: using AI for screening people for jobs, for example, is high risk, so let's make sure to mitigate that risk.

Unfortunately, I'm seeing much more bad regulation around the world than good regulation. I think the US White House executive order is a bad idea in terms of starting to put burdensome reporting requirements on people training large models. It will stifle innovation, because only large tech companies will have the capacity to manage compliance. If something like the White House executive order ends up being enforced in other countries too, the winners, arguably, will be a handful of tech companies, while it will become much harder to access open-source technology.

I'm not very familiar with the Indian approach to regulation. But my sense is that India is taking a very light touch. And I think India's approach is good. In fact, most Asian nations have been regulating AI with a much lighter touch, which has been a good move.

I think regulating AI applications is a great idea. Deepfakes are problematic, and certainly one of the most disgusting things has been the generation of non-consensual pornographic images. I'm glad regulators are trying to regulate those horrible applications.

Yet, having more intelligence in the world via human intelligence or artificial intelligence is a good thing. While intelligence can be used for nefarious purposes too, one of the reasons that humanity has advanced over the centuries is because we all collectively got smarter and better educated and have more knowledge. Slowing that down (with regulation) seems like a very foolish thing for governments to do.

'Gen AI for Everyone' is the fastest-growing course of 2023, with about 74,000 enrollments in just the first week. That probably won't surprise you, since there's very high interest in learning Gen AI skills and technical skills. We are seeing a lot of traction on developer-oriented content, as well as from a non-technical audience, because GenAI is so disruptive; it is changing the nature of work for a lot of professions. I hope that 'Gen AI for Everyone' and other courses on Coursera can help people use the technology to become developers that build on top of the technology and create a layer that is valued by the builders (of the platform).

Fear of job losses is a very emotional subject. I wish AI was even more powerful than it is. But realistically, it can automate only a fraction of tasks done by the economy. There's still so much that Gen AI cannot do. Some estimates peg that GenAI can automate maybe 15% of the tasks done in the US economy; at the higher end, maybe approaching 50%. But 15 or 50%, these are huge numbers as a percentage of the economy. We should embrace it (Gen AI) and figure out the use cases. In terms of how to think about one's own career, I hope that the 'Gen AI for Everyone' course will help with that.

Any company that does a lot of knowledge work should embrace it (Generative AI), and do so relatively quickly. Even industries that don't do knowledge work seem to be becoming more data-oriented. Even things like manufacturing and natural resource extraction, which traditionally did not seem knowledge-driven, are becoming more data- and AI-oriented, and it turns out that the cost of experimenting with and developing with Gen AI is lower than it was with earlier AI.

A good recipe for senior executives is to take a look at the jobs being done by people in the company, break the jobs down into tasks, and see which tasks are amenable to automation. And given the low development costs, definitely every large enterprise should look at it (Gen AI). Even medium enterprises may have the resources to develop Gen AI applications, and so do small enterprises.

Gen AI is absolutely safe enough for many applications, but not for all applications. Part of the job of not just C-suite executives, but of companies broadly, is to identify and take advantage of Gen AI within those applications. Would I have Gen AI tell me how to take a pill as a drug for a specific ailment? Probably not. But Gen AI can be used for a lot of applications, including as a thought partner to help with brainstorming, improving your writing, or summarising and processing information. There are a lot of use cases in corporations too where it can boost productivity significantly.

Think about the use of a CPU (central processing unit) that spans different sizes for different applications. Today, we have very powerful data centre servers and GPUs (graphics processing units), and yet we have a CPU running on my laptop, a less powerful one running on my phone, an even less powerful one running my watch, and an even less powerful one controlling the sunlight in my car.

Likewise, a really advanced model like GPT (generative pre-trained transformer) should be used for some very complex tasks. But if your goal is to summarise conversations in the contact centre, or maybe check grammar, or for your writing, then maybe it does not need to know much about history, philosophy, or astronomy, implying that a smaller model would work just fine.

Looking at the future, there will be more work on edge (on-device) AI, where more people will run smaller models that can protect one's privacy too.

There are models where you can probably understand the code better and where, perhaps, the transparency is higher. But even for open source, it's pretty hard to figure out why a specific algorithm gave a certain answer.

While it is true that there have been some companies that are a thin wrapper on some APIs (application programming interfaces), there are actually a lot of opportunities to build really deep tech companies atop new Gen AI capabilities. Take a different analogy--I don't think that Uber is a thin wrapper on top of iOS, but you can do a lot of work on top of such platforms. AI Fund focuses on venture-scale businesses, so we tend to go after businesses with a significant market need that we can build technology to address. And we are building things that involve deep tech that are not that easy to replicate.

But I would tell an entrepreneur-- just go and experiment. And frankly, if you build a thin wrapper that works, great. Use those learnings to maybe make that wrapper deeper or go do something else that is even harder to build. This is a time of expansion, creativity, and innovation but innovators must be responsible. There are so many opportunities to build things that were not possible before the new tools were available.

AI is a very transformative technology that benefits every individual and every business. That's why I was excited to teach 'GenAI for Everyone'. Because we have to help every individual and every business navigate this. I hope that people will jump on, learn about technology, and use it to benefit themselves in the communities around them and the world.

See the rest here:

India's approach to regulating AI is good, says Andrew Ng | Mint - Mint


CNBC Daily Open: Back to square one? – CNBC


This report is from today's CNBC Daily Open, our new, international markets newsletter. CNBC Daily Open brings investors up to speed on everything they need to know, no matter where they are. Like what you see? You can subscribe here.

Last burst before a break: U.S. stocks ended Wednesday in positive territory before going on break today for Thanksgiving. Treasurys briefly fell to a two-month low before inching up again. Asia-Pacific markets traded mixed Thursday. Australia's S&P/ASX 200 fell about 0.6% as flash estimates showed the country's business activity contracting at its fastest pace in 27 months. Meanwhile, Japan's Nikkei 225 added 0.3%.

Altman's back: Sam Altman has returned as the CEO of OpenAI, less than a week after he was ousted by the company's previous board. There's also a board reshuffle, with Bret Taylor, former co-CEO of Salesforce, and Larry Summers, former U.S. Treasury secretary, joining. Separately, OpenAI researchers reportedly warned the board of an AI breakthrough ahead of Altman's ouster.

Binance outflow: Binance has seen outflows of more than $1 billion in the past 24 hours, and that figure doesn't even include bitcoin, according to data from blockchain analysis firm Nansen. Still, more than $65 billion worth of assets remain on Binance, Nansen noted, and there hasn't been a "mass exodus" of funds. The withdrawals come after founder Changpeng Zhao pleaded guilty to criminal charges Tuesday.

Smartphone sales rebound: Global smartphone sales rose 5% year on year in October, reversing a downward trend that lasted 27 months, according to data from Counterpoint Research. "The growth has been led by emerging markets with a continuous recovery in Middle East and Africa, Huawei's comeback in China and onset of festive season in India," the research firm said.

[PRO] New AI trend: Generative artificial intelligence relies heavily on computing power, typically hosted on cloud servers. Some companies are trying to change the way consumers use AI, which would save costs, reduce latency and help those companies' shares outperform in 2024 and 2025, according to Morgan Stanley.

A slow day in U.S. markets as investors turned their thoughts to turkey rather than Treasurys.

To be sure, it was exciting in Treasury land for a while. The 10-year Treasury yield fell to 4.369% during the day, its lowest since Sept. 20. But it rebounded to 4.41%, essentially unchanged from U.S. trading on Tuesday.

The same trajectory of sudden intensity followed by a reversion to the norm seems to have played out across various events this week.

Sam Altman's back as OpenAI's CEO less than a week after his ouster. Oil prices clawed back most of their losses after they slumped around 5% Wednesday on the news that the Organization of Petroleum Exporting Countries delayed their meeting by four days. Jack Ma's holding off his previously announced plans to sell Alibaba shares after they tumbled around 9% last week.

The dust seems to have settled for now but that doesn't diminish the volatility of those situations. Oil prices could shoot up again following the OPEC+ meeting. New developments could crop up at OpenAI.

Still, investors took a breather yesterday. Trading volume was muted: The SPDR S&P 500, which tracks the broad-based index, traded 59.3 million shares, below its 30-day average of 84.6 million.

Major indexes managed to end the day in positive territory. The S&P 500 added 0.41%, the Dow Jones Industrial Average gained 0.53% and the tech-heavy Nasdaq Composite rose 0.46%, despite Nvidia dropping 2.46% after reporting earnings.

U.S. markets close for Thanksgiving on Thursday, and return for a shortened session the next day. Investors might be thankful for that, too, after a hectic week in markets and business.

Excerpt from:
CNBC Daily Open: Back to square one? - CNBC


Scots victims could be illegally compromised by £33m criminal justice IT system – Daily Record

A £33 million criminal justice IT system could illegally compromise the personal data of thousands of Scots victims.

Watchdogs have raised serious concerns about trials of the Digital Evidence Sharing Capability (DESC) service by Police Scotland and said the Crown Office could already have broken the law.

The system bought by the Scottish Government from US firm Axon allows witness statements, body-cam footage, fingerprints and other details to be uploaded and shared with other agencies. But the Sunday Mail can reveal the Scottish Police Authority (SPA) and biometrics commissioner have given formal warnings over its legality and security.

They have raised fears it could lead to class action lawsuits, hacking and the prospect of the US government snooping on citizens.

Opposition politicians have demanded a halt to the roll-out until concerns are answered.

Scottish Tory shadow justice secretary Russell Findlay said: "SNP ministers cannot press ahead with this system without seeking categorical assurances about the security of the highly sensitive and personal data of crime victims and witnesses. It appears these concerns have already been flagged within Scottish policing, so it would be grossly irresponsible, and financially improper, to proceed without ensuring they are addressed."

Scottish Lib Dem justice spokesperson Liam McArthur said: "These documents raise real questions about why Police Scotland has pressed ahead with this scheme while the legal status is still up in the air. It's an approach that opens up the risk of legal challenges bogging down the service in litigation for years."

Concern revolves around files being held on a US firm's cloud servers. This could leave Scottish authorities unable to comply with UK data protection laws.


Axon's system is being hosted on Microsoft Azure. But in an impact assessment drafted by the SPA and seen by the Sunday Mail, the watchdog warned transfers to overseas cloud providers are likely to be illegal. It added its concerns relate to the provider, a wholly owned US company, and its sub-processor, Microsoft Azure.

The document said US law allows its attorney general and intelligence services director to jointly authorise targeted surveillance of people outside the US, as long as they are not a US citizen.

US law also allows its government to access any data, stored anywhere by US firms in the cloud. While the data protection impact assessment said the risk of US government access via the Cloud Act was unlikely, it added the fallout would be cataclysmic.

Scottish biometrics commissioner Brian Plastow also raised concerns. He served Police Scotland with a formal notice in April requiring it to demonstrate its use of the system was compliant with the Data Protection Act.

Police Scotland confirmed in July it had uploaded significant volumes of images to DESC during this pilot, while insisting appropriate encryption was in place. But Plastow said this did not ameliorate specific concerns.

He is now reviewing whether Police Scotland is complying with a data code of conduct.

The SPA said there are often associated risks when introducing new digital solutions, and that it is satisfied Police Scotland is taking all necessary steps to address and mitigate these before rollout.

Police Scotland said it was continuing to "identify, assess and mitigate any risks relating to data sovereignty". The Scottish Government said: "We take the privacy of citizens' data very seriously."

Axon said it "has established and continues to enhance data protection measures to support customers, including our contract with the Scottish Government".


See the article here:
Scots victims could be illegally compromised by 33m criminal justice IT system - Daily Record


Baidu reveals expectations-beating earnings and touts its new ChatGPT-like AI models, amid leadership chaos at U.S. competitor OpenAI – Fortune

Baidu shares jumped almost 4.5% during Wednesday trading in Hong Kong following expectations-beating revenue from the Chinese tech giant. Baidu is trying to solidify an early lead in the race to win China's AI market, starting with the launch of its ChatGPT-like ERNIE Bot earlier this year.

Baidu generated revenue of $4.7 billion for the three months ending Sept. 30, a 6% year-on-year increase. The company also earned $916 million in net income, compared to a $20.6 million loss for the same quarter last year.

"Our AI-centric business and product strategy should set the stage for sustained multiyear revenue and profit expansion within our ERNIE and ERNIE Bot ecosystem," CEO Robin Li said in a statement on Tuesday.

Baidu and Li, also the company's founder, hope AI will revive the tech company's fortunes, after the company lost ground to competitors like Tencent and Alibaba. The company is primarily known for its search engine, but is now shifting to new sectors like automated driving and generative AI.

Baidu launched ERNIE earlier this year, though observers were underwhelmed by the presentation compared to its non-Chinese peers like Google and Microsoft. Yet the Chinese company has continued to update the model and its chatbot, releasing ERNIE 4.0 in October.

The tech company also shared details on its robotaxi service, named Apollo Go, which operates in major cities like Wuhan, Shenzhen, and Beijing. The autonomous ride-hailing service carried 821,000 passengers last quarter, a 73% increase from a year ago.

Baidu is part of a growing rush in China's tech sector to launch generative AI products, and arguably leading the way: The company is the only Chinese firm featured in Fortune's inaugural AI Innovators list, released on Tuesday, which highlights 50 companies at the forefront of AI.

The company's ERNIE Bot is perhaps China's closest equivalent to OpenAI's ChatGPT, which is currently banned in China. The bot outperforms ChatGPT in several Chinese-language tasks, Baidu says.

Yet Baidu's Big Tech peers are also barreling into the space. Alibaba, Tencent, and JD.com have all announced their own large language models. (JD.com CEO Sandy Xu Ran is joining Baidu's board as an independent director, the company announced Tuesday.) Several smaller AI companies and startups are also developing their own models: There are now over 130 large language models being developed in China today, according to one estimate.

Yet China's AI companies need to work within the limits of what Beijing allows. According to rules approved in July, Chinese developers must ensure that their AI services align with core socialist values and national security. Yet the rules also highlight the importance of innovation, and revisions weakened provisions on how to penalize companies that break them.

Developers in China face another threat: U.S. rules limiting the sale of advanced AI chips from firms like Nvidia to Chinese companies. Last week, Alibaba shelved its plan to spin off its cloud computing division as an independent company, blaming uncertainty from U.S. export controls.

On Tuesday, Li warned that these restrictions could force the consolidation of large language models in China. Baidu has enough AI chips stockpiled in the near term, he said.

Regulation and access to chips are the primary risks faced by China's AI sector, seemingly more real-world concerns compared to the recent worries in the U.S. about safety and more existential threats from the new technology. OpenAI, the developer behind ChatGPT, fired its CEO Sam Altman on Friday, reportedly due to concerns that he was moving too quickly on releasing the organization's products. (Altman returned as OpenAI's CEO on Wednesday morning, ending days of negotiations to bring him back to the organization.)

"The debate around the existential risks around [artificial general intelligence] has not been as much of a priority within the Chinese AI community, which has focused more on developing solid use cases for enterprise deployments of generative AI," Paul Triolo, an associate partner for China and technology policy lead at the advisory firm Albright Stonebridge, told Fortune on Monday.

"Conversations on AI risk will be very much a government-driven thing in China. No CEO is going to be forced out because of disputes over the lack of guardrails to tackle the existential risks of AGI," he added.

Baidu established an ethics committee in October to guide the practices of technology professionals, the company said in its earnings statement.

Additional reporting by Nicholas Gordon

Continue reading here:

Baidu reveals expectations-beating earnings and touts its new ChatGPT-like AI models, amid leadership chaos at U.S. competitor OpenAI - Fortune
