How to Deploy the Nextcloud Cloud Server on AlmaLinux – The New Stack

Cloud services are all over the place. For most people, the usual options (such as Google, iCloud, etc.) are fine. For others who demand more security and control, there are additional options, such as Nextcloud.

Nextcloud is an open source platform that includes all the features you’ve grown accustomed to (such as files, editors, chat, version control and much more) and can be deployed to hardware on your network. Because of that, you don’t have to worry about third parties having access to your data. That’s a win for any security/privacy-minded individuals or companies.

I want to show you how to deploy Nextcloud to the open source AlmaLinux operating system. Unlike deploying to Ubuntu Server, there are a few more steps required, which can often trip people up.

Let me help you avoid those pitfalls.

Ready?

The only things you’ll need for this basic installation are a running instance of AlmaLinux 9 and a user with sudo privileges. If you want to point a domain name at the instance, you’ll also need an FQDN and an SSL certificate to secure it. Since I’m only deploying this to an internal network, I’m not going to worry about those things at the moment.

With those things at the ready, let’s get to work.

There are a few dependencies we have to take care of.

The first thing we’ll do is install the Apache web server. Log into AlmaLinux and issue the command:

sudo dnf install httpd -y

sudo systemctl enable --now httpd

Next, we’ll install the necessary PHP version. First, enable the EPEL repository with:

sudo dnf install epel-release -y

Then add the Remi repository:

sudo dnf install https://rpms.remirepo.net/enterprise/remi-release-9.rpm -y

Reset the default PHP module stream and enable Remi’s PHP 8.1 stream:

sudo dnf module reset php -y

sudo dnf module enable php:remi-8.1 -y

We now need to configure PHP. Open the configuration file with:
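On AlmaLinux, the main PHP configuration file lives at /etc/php.ini, so the command is most likely:

sudo nano /etc/php.ini

In that file, locate and adjust the following entries (the values shown are commonly recommended for Nextcloud and are offered as a sensible starting point, not the article’s exact figures):

memory_limit = 512M
upload_max_filesize = 500M
post_max_size = 600M
max_execution_time = 300
date.timezone = YOUR_TIMEZONE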

Where YOUR_TIMEZONE is the timezone in which your server is located.

You can make short work of locating the above entries by using the nano search tool (which is called up with the Ctrl+w keyboard shortcut).

Save and close the file.

Next, open the PHP OPCache configuration file with:

sudo nano /etc/php.d/10-opcache.ini
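Nextcloud generally recommends OPcache values along these lines (treat them as a reasonable default rather than figures from the original article):

opcache.enable=1
opcache.interned_strings_buffer=16
opcache.max_accelerated_files=10000
opcache.memory_consumption=128
opcache.save_comments=1
opcache.revalidate_freq=1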

Save and close the file.

Restart Apache and PHP with the following:
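Assuming PHP-FPM is serving PHP (the default with the Remi packages), those commands are:

sudo systemctl restart httpd

sudo systemctl restart php-fpm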

The next step is the installation of the MariaDB database. To do that, we must create a repository file with the command:

sudo nano /etc/yum.repos.d/MariaDB.repo
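In that file, paste a repository definition along these lines (the 10.11 series in the baseurl is an assumption; substitute whichever MariaDB release you want to track):

[mariadb]
name = MariaDB
baseurl = https://rpms.mariadb.org/10.11/rhel/$releasever/$basearch
gpgkey = https://rpms.mariadb.org/RPM-GPG-KEY-MariaDB
gpgcheck = 1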

Save and close the file.

Install MariaDB with:

sudo dnf install MariaDB-server MariaDB-client -y

sudo systemctl enable --now mariadb

Next, secure the installation (setting a root password and answering the prompts) with:

sudo mariadb-secure-installation

With MariaDB installed, it’s time to create our database. Access the MariaDB console with:
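On a fresh install, the console opens with:

sudo mariadb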

CREATE DATABASE nextcloud_db;

CREATE USER nextuser@localhost IDENTIFIED BY 'PASSWORD';

Grant the required permissions with:

GRANT ALL PRIVILEGES ON nextcloud_db.* TO nextuser@localhost;

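Finally, flush the privileges and exit the console:

FLUSH PRIVILEGES;

EXIT;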
Before you download Nextcloud, you’ll need to install a few more bits with:

sudo dnf install unzip wget setroubleshoot-server setools-console -y

Download the latest release of Nextcloud:

sudo wget https://download.nextcloud.com/server/releases/latest.zip
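Unpack the archive into the Apache document root and create the data directory (the /var/www/nextcloud path is assumed from the ownership command that follows):

sudo unzip latest.zip -d /var/www/

sudo mkdir -p /var/www/nextcloud/data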

sudo chown -R apache:apache /var/www/nextcloud

Unless we configure SELinux properly, Nextcloud will not function. The first thing to do is to properly label all of the Nextcloud files and folders with the following commands:
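A typical labeling, assuming the default /var/www/nextcloud location, marks the writable directories and then applies the contexts:

sudo semanage fcontext -a -t httpd_sys_rw_content_t '/var/www/nextcloud/data(/.*)?'

sudo semanage fcontext -a -t httpd_sys_rw_content_t '/var/www/nextcloud/config(/.*)?'

sudo semanage fcontext -a -t httpd_sys_rw_content_t '/var/www/nextcloud/apps(/.*)?'

sudo restorecon -Rv /var/www/nextcloud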

Next, you must allow the webserver to connect to the network with the following commands:
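The usual booleans are these (your setup may need only the first):

sudo setsebool -P httpd_can_network_connect on

sudo setsebool -P httpd_can_network_connect_db on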

We have to create a new policy module to ensure PHP-FPM can connect to the MariaDB socket. First, create a new file with the command:
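sudo nano my-phpfpm.te

Into that file, paste a small type-enforcement policy along these lines (the module name matches the compile commands that follow; the allow rule is the kind audit2allow typically generates for this denial, so treat it as a sketch rather than the article’s exact policy):

module my-phpfpm 1.0;

require {
    type httpd_t;
    type mysqld_t;
    class unix_stream_socket connectto;
}

allow httpd_t mysqld_t:unix_stream_socket connectto;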

Save and close the file.

Convert the file to an SELinux policy module with the command:

sudo checkmodule -M -m -o my-phpfpm.mod my-phpfpm.te

sudo semodule_package -o my-phpfpm.pp -m my-phpfpm.mod

sudo semodule -i my-phpfpm.pp

We now have to create a virtual host file with the command:

sudo nano /etc/httpd/conf.d/nextcloud.conf
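A minimal virtual host for Nextcloud looks like this (the ServerName is a placeholder; the Directory options mirror Nextcloud’s stock Apache example):

<VirtualHost *:80>
    ServerName nextcloud.example.com
    DocumentRoot /var/www/nextcloud

    <Directory /var/www/nextcloud/>
        Require all granted
        AllowOverride All
        Options FollowSymLinks MultiViews
    </Directory>
</VirtualHost>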

Save and close the file.

Restart Apache with:

sudo systemctl restart httpd

You should now be able to point a web browser to http://SERVER (where SERVER is the IP address of the hosting server) and be greeted by the Nextcloud web-based installer, where you can create an admin user and finish up the process with a few clicks.

If you find there’s an error connecting to the database and writing to the data directory, temporarily disable SELinux (until the next reboot) with:
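sudo setenforce 0

SELinux returns to enforcing mode automatically on reboot, so no further cleanup is required.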

After the installation completes, reboot the machine; SELinux will be back to keeping tabs on the system, and Nextcloud will be up and running.

And that, my friends, is how you deploy Nextcloud to AlmaLinux.


This breakthrough tech could solve Microsoft’s AI power consumption woes and is 1,000x more energy-efficient – Windows Central

Generative AI is a resource-hungry form of technology. While it's been leveraged to achieve impressive feats across medicine, education, computing, and more, its power demands are alarmingly high. According to a recent report, Microsoft and Google's electricity consumption surpasses the power usage of over 100 countries.

The high power demand is holding the tech back from realizing its full potential. Even billionaire Elon Musk says we might be on the precipice of the most significant technological breakthrough with AI, but there won't be enough electricity to power its advances by 2025.

OpenAI CEO Sam Altman has shown interest in exploring nuclear fusion as an alternative power source for the company's AI advances. On the other hand, Microsoft has partnered with Helion to start generating nuclear energy for its AI efforts by 2028.

A paper published in Nature may offer a silver lining that could help Microsoft facilitate its AI efforts. Researchers have developed a new prototype chip, dubbed computational random-access memory (CRAM), that could scale down AI's power-hungry demands by over 1,000 times, translating to 2,500x energy savings in one of the simulations shared.

As you may know, traditional AI processes transfer data between logic and memory, which heavily contributes to their high power consumption. The CRAM approach, however, keeps computation within the memory itself, cutting out those power-hungry transfers.

With the rapid progression of AI, tools like ChatGPT and Microsoft Copilot would've consumed enough electricity to power a small country for a whole year by 2027. However, the researchers behind the CRAM model believe it could achieve energy savings of up to 2,500 times compared to traditional methods.

The CRAM model isn't a new phenomenon. According to Professor Jian-Ping Wang, the senior author of the paper:

"Our initial concept to use memory cells directly for computing 20 years ago was considered crazy."

CRAM leverages the spin of electrons to store data, compared to traditional methods that use electrical charges. It also offers high speeds, low power consumption, and is environmentally friendly.

Ulya Karpuzcu, a co-author of the paper, further stated:

"As an extremely energy-efficient digital-based in-memory computing substrate, CRAM is very flexible in that computation can be performed in any location in the memory array. Accordingly, we can reconfigure CRAM to best match the performance needs of a diverse set of AI algorithms."

While the researchers have yet to determine how far they can push this model in terms of scalability, it shows great promise. It could solve AI's most significant deterrent: high power consumption.


OpenAI Director Says Artificial General Intelligence May Be 5 Years Out – PYMNTS.com

How long will it take for artificial intelligence to be as smart as human beings?

According to OpenAI board member Adam D’Angelo, that milestone is likely to happen within five to 15 years, Seeking Alpha reported Monday (July 29).

D’Angelo, CEO and co-founder of Quora, made that prediction during an event last week, the report added. He said the advent of artificial general intelligence (AGI) will be “a very, very important change in the world when we get there.”

His comments follow reports from earlier this month that OpenAI had developed a way to track its progress toward building AGI, with the company sharing a new five-level classification system with employees.

The company believes it is now at Level 1, AI that can interact in a conversational way with people, and is approaching Level 2, systems that can solve problems as well as a human with a doctorate-level education.

The next levels involve AI systems that can spend several days acting on a user’s behalf, develop innovations and, finally, at Level 5, do the work of an organization.

OpenAI CEO Sam Altman and Chief Technology Officer (CTO) Mira Murati said last fall that AGI will be reached within the next 10 years.

“We’re big believers that you give people better tools, and they do things that astonish you,” Altman said. “And I think AGI will be the best tool humanity has yet created.”

As PYMNTS wrote recently, the reports of these efforts have sparked buzz in the business world about the possibility of AI-powered commerce that could rewrite the rules of global trade, assuming the technology can live up to the hype.

“OpenAI’s pursuit of human-level reasoning isn’t just a technological marvel; it’s a narrative of pushing boundaries and sparking new possibilities in every sector,” Ghazenfer Mansoor, founder and CEO of Technology Rivers, told PYMNTS. “In business, AI can dramatically change how supply chains are managed, forecast market trends with great accuracy, and make customer experiences very personal on a big scale.”

Earlier this year, OpenAI staffers reportedly showed demos of AI models that could answer tricky science and math questions, with one model scoring more than 90% on a championship math dataset. The company also recently showcased a project with new human-like reasoning skills at an internal meeting.

“The way such an algorithm can work is by creating multiple options, following a tree of possibilities, and then reasoning about the outcome and choosing the best path,” SmythOS CTO Alexander De Ridder told PYMNTS. This is similar to how chess players think several steps ahead before choosing to move a piece.

He suggested that OpenAI’s innovation likely involves an algorithmic breakthrough in how to do this efficiently and scalably, potentially combining autonomous web research and tool usage to arrive at a reasoning breakthrough.


To understand the perils of AI, look to a Czech novel from 1936 – The Economist

When historians in future centuries compile the complete annals of humankind, their output will be divided into two tomes. The first will cover the hundreds of thousands of years during which humans have been earth’s highest form of intelligence. It will recount how souped-up apes came up with stone tools, writing, sliced bread, nuclear weapons, space travel and the internet, and the various ways they found to misuse them. The second tome will describe how humans coped with a form of intelligence higher than their own. How did our sort fare once we were outsmarted? Rather thrillingly, the opening pages of that second volume may be about to be written. Depending on whom you ask, artificial general intelligence (systems capable of matching humans, and then leaving them in the cognitive dust) is either months, years or a decade or two away. Predictions of how this might pan out range from everyone enjoying a life of leisure to the extinction of the human race at the hands of paperclip-twisting robots.


The exponential expenses of AI development – AI News

Tech giants like Microsoft, Alphabet, and Meta are riding high on a wave of revenue from AI-driven cloud services, yet simultaneously drowning in the substantial costs of pushing AI’s boundaries. Recent financial reports paint a picture of a double-edged sword: on one side, impressive gains; on the other, staggering expenses.

This dichotomy has led Bloomberg to aptly dub AI development a “huge money pit,” highlighting the complex economic reality behind today’s AI revolution. At the heart of this financial problem lies a relentless push for bigger, more sophisticated AI models. The quest for artificial general intelligence (AGI) has led companies to develop increasingly complex systems, exemplified by large language models like GPT-4. These models require vast computational power, driving up hardware costs to unprecedented levels.

To top it off, the demand for specialised AI chips, mainly graphics processing units (GPUs), has skyrocketed. Nvidia, the leading manufacturer in this space, has seen its market value soar as tech companies scramble to secure these essential components. Its H100 graphics chip, the gold standard for training AI models, has sold for an estimated $30,000 with some resellers offering them for multiple times that amount.

The global chip shortage has only exacerbated this issue, with some firms waiting months to acquire the necessary hardware. Meta Chief Executive Officer Mark Zuckerberg previously said that his company planned to acquire 350,000 H100 chips by the end of this year to support its AI research efforts. Even if he gets a bulk-buying discount, that quickly adds up to billions of dollars.

On the other hand, the push for more advanced AI has also sparked an arms race in chip design. Companies like Google and Amazon invest heavily in developing their own AI-specific processors, aiming to gain a competitive edge and reduce reliance on third-party suppliers. This trend towards custom silicon adds another layer of complexity and cost to the AI development process.

But the hardware challenge extends beyond just procuring chips. The scale of modern AI models necessitates massive data centres, which come with their own technological hurdles. These facilities must be designed to handle extreme computational loads while managing heat dissipation and energy consumption efficiently. As models grow larger, so do the power requirements, significantly increasing operational costs and environmental impact.

In a podcast interview in early April, Dario Amodei, the chief executive officer of OpenAI rival Anthropic, said the current crop of AI models on the market cost around $100 million to train. “The models that are in training now and that will come out at various times later this year or early next year are closer in cost to $1 billion,” he said. “And then I think in 2025 and 2026, we’ll get more towards $5 or $10 billion.”

Then, there is data, the lifeblood of AI systems, presenting its own technological challenges. The need for vast, high-quality datasets has led companies to invest heavily in data collection, cleaning, and annotation technologies. Some firms are developing sophisticated synthetic data generation tools to supplement real-world data, further driving up research and development costs.

The rapid pace of AI innovation also means that infrastructure and tools quickly become obsolete. Companies must continuously upgrade their systems and retrain their models to stay competitive, creating a constant cycle of investment and obsolescence.

On April 25, Microsoft said it spent $14 billion on capital expenditures in the most recent quarter and expects those costs to “increase materially,” driven partly by AI infrastructure investments. That was a 79% increase from the year-earlier quarter. “Alphabet said it spent $12 billion during the quarter, a 91% increase from a year earlier, and expects the rest of the year to be at or above that level as it focuses on AI opportunities,” the article by Bloomberg reads.

Bloomberg also noted that Meta, meanwhile, raised its estimate of investments for the year and now believes capital expenditures will be $35 billion to $40 billion, which would be a 42% increase at the high end of the range. It cited “aggressive investment in AI research and product development,” Bloomberg wrote.

Interestingly, Bloomberg’s article also points out that despite these enormous costs, tech giants are proving that AI can be a real revenue driver. Microsoft and Alphabet reported significant growth in their cloud businesses, mainly attributed to increased demand for AI services. This suggests that while the initial investment in AI technology is staggering, the potential returns are compelling enough to justify the expense.

However, the high costs of AI development raise concerns about market concentration. As noted in the article, the expenses associated with cutting-edge AI research may limit innovation to a handful of well-funded companies, potentially stifling competition and diversity in the field. Looking ahead, the industry is focusing on developing more efficient AI technologies to address these cost challenges.

Research into techniques like few-shot learning, transfer learning, and more energy-efficient model architectures aims to reduce the computational resources required for AI development and deployment. Moreover, the push towards edge AI (running AI models on local devices rather than in the cloud) could help distribute computational loads and reduce the strain on centralised data centres.

This shift, however, requires its own set of technological innovations in chip design and software optimisation. Overall, it is clear that the future of AI will be shaped not just by breakthroughs in algorithms and model design but also by our ability to overcome the immense technological and financial hurdles that come with scaling AI systems. Companies that can navigate these challenges effectively will likely emerge as the leaders in the next phase of the AI revolution.



5 Visionary Leaders Driving Artificial Intelligence in 2024 – CEO Insights Asia

Keerthana Kantharaj, Correspondent

The fast transition to an artificial intelligence-wired (AI) society is thanks to decades of innovation, experimentation and research. At the heart of this progress are some unique visionary leaders in AI: technology leaders at the helm of the transformation stories of many industries. These AI industry leaders are also the ones driving the ethical use of technology in the common man’s world. Visionaries driving AI continue setting new standards in the fields of computational intelligence, machine learning and neural networks. Here are the top AI visionaries of 2024, spotlighting how their achievements and innovations have paved the way for a technological revolution.

Mustafa Suleyman, Co-Founder, DeepMind & Inflection AI: A Torchbearer in Ethical AI Research

Britain-based computer scientist and entrepreneur Mustafa Suleyman is one of the world’s most well-known names and is at the forefront of the AI boom. During his time as DeepMind’s co-founder, he left behind a huge trail of ground-breaking algorithms and systems that advance general intelligence and reinforcement learning. Along with Demis Hassabis and Shane Legg, Suleyman made DeepMind famous for its work in deep learning, especially in the fields of speech and picture recognition. Mustafa has guided teams to implement path-breaking AI systems through Google products and in other industries to showcase the practical application of AI research.

Mustafa has been instrumental in changing the way AI is incorporated into products and in ensuring moral concerns are addressed, lending his expertise to Google's AI policy and product development.

He left DeepMind to focus on his mission to enhance human-computer connection, co-founding Inflection AI in 2022, a company that specializes in "natural language interfaces" and generative AI. Mustafa still plays an active role, continuing to share his experience through writings, talks and board memberships at prestigious institutions like The Economist. Today, he stands for the promotion of ethical AI research and its potential advantages for society.

Yoshua Bengio, Co-Founder, Mila: The Coach of Upcoming AI Scientists

One of the fathers of deep learning is the Canadian computer scientist Yoshua Bengio, who has propelled advancements in the creation of learning algorithms and neural networks. His research on convolutional neural networks, restricted Boltzmann machines and deep belief networks underpins the advancement of deep learning and many of the widely used AI systems today. Many of his write-ups, co-authored books and research articles have been key guides in shaping the conversation and trajectory of deep learning research. Today, he shares his technology wisdom with the upcoming generation of AI scientists.

He co-founded Mila, the Montreal Institute for Learning Algorithms, a research institution showcasing his dedication to advancing deep learning research and fostering collaboration. He is also vocal about developing AI responsibly, including addressing concerns about it. Together with Yann LeCun and Geoffrey Hinton, Yoshua received the coveted Turing Award, often called the "Nobel Prize" of computing, in 2018 for groundbreaking work in deep learning.

Yann LeCun, Chief AI Scientist of Meta: A Legendary Mentor for Next Generation AI Researchers

Yann LeCun, often called a godfather of AI, is renowned for his major contributions to the field of deep learning, specifically in using neural networks to analyze and interpret complicated data. His research plays a major part in leading the AI revolution by enabling machines to learn from and analyze data in ways that were previously unthinkable. One of his most widely known works is the creation of convolutional neural networks (CNNs), which are of broad applicative value in speech and picture recognition. LeCun pioneered the use of CNNs for image recognition in the 1990s, which sparked advances in the field and allowed computers to accurately identify and categorize images.

In recognition of their ground-breaking work in deep learning, LeCun, Bengio and Hinton shared the prestigious Turing Award in 2018. Currently, LeCun is Chief AI Scientist at Meta, leading a team and directing the company's AI research and development. He also founded the Computational and Biological Learning Lab at New York University, a research group actively exploring frontiers in AI and machine learning. He has authored or co-authored a number of significant research papers and actively mentors the next generation of AI researchers, sharing his knowledge and expertise.

Dr. Andrew Ng, Co-Founder, DeepLearning.AI & Coursera: Leader in the Ethical Development of AI

Renowned computer scientist and entrepreneur Dr. Andrew Ng is driving the ethical development of AI systems. Dr. Andrew has authored and co-authored over 200 academic articles on robotics, machine learning and related topics. He led Google Brain, a research team focused on creating deep learning algorithms, and oversaw the expansion of Baidu's AI group into a large staff. Dr. Andrew is an ardent supporter of granting everyone access to AI. He created DeepLearning.AI and co-founded Coursera, two of the top online learning platforms, to provide millions of students across the world with free and open-source education. In addition, Ng started Landing AI, a business that creates SaaS solutions driven by AI, and the AI Fund, which aims to support innovation and invest in promising AI startups.

Ng's impact goes beyond his technical prowess. He was included in the lists of the 100 Most Influential People by Time magazine in 2013, the Most Creative People by Fast Company in 2014, and the Time 100 Most Influential People in AI in 2023. Additionally, he has advised government organizations on matters pertaining to AI and national security and has collaborated extensively with professionals in the field to create moral and responsible AI practices. His work has influenced corporate and governmental policies around artificial intelligence and has helped to create more ethical and responsible methods for developing and using AI. His leadership in the field of artificial intelligence is cemented by his multifarious accomplishments in research, education, and business.

Sam Altman, Co-Founder, OpenAI: A Young Revolutionary Influencing AIs Ethical Development

Sam Altman is a well-known figure in the field of generative AI and co-founder of the popular AI research and deployment group OpenAI. With the introduction of ChatGPT in November of last year, Altman has become a beacon of the AI revolution, and his work on generative AI has been crucial in influencing the direction of AI innovation going forward. A massive tech frenzy was sparked by the generative AI chatbot, which prompted Google and Meta to invest enormous sums of money in the research and development of artificial intelligence.

However, Altman's contribution to igniting the AI revolution goes far beyond developing ChatGPT. Altman has been at the vanguard of numerous innovative AI projects and has been a significant figure in the internet industry for more than ten years. He was previously president of Y Combinator, a startup accelerator that has assisted in the development of firms like Reddit, Dropbox and Airbnb. Altman was a major investor in several AI businesses, notably DeepMind, which Google acquired in 2014 for a reported $500 million. In addition, Altman has warned about the possible dangers of artificial intelligence and spoken in favor of developing AI in an ethical manner. He has also supported the use of AI to address some of the most important issues facing the globe, like healthcare and climate change.


Apple agrees to adopt AI safeguards following in footsteps of tech rivals – New York Post

Apple on Friday said it will voluntarily adopt safeguards for artificial intelligence joining other tech giants including OpenAI, Amazon, Google parent Alphabet and Meta in complying with Biden administration guidelines aimed at minimizing national security risk.

In July 2023, the Biden administration announced that it had secured voluntary commitments from seven leading AI companies who pledged to help move toward “safe, secure, and transparent development” of the technology.

Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI were the first seven companies to sign on to the administration’s initiative.

The companies are asked to transparently share results of tests that measure compliance with security and anti-discrimination regulations.

Apple joined its tech rivals after announcing last month that it would be incorporating AI features into its signature products, including the iPhone, iPad and Mac.

The Cupertino, Calif.-based colossus announced a fresh set of free software updates dubbed Apple Intelligence in an effort to catch up with other Silicon Valley rivals, such as Microsoft and Google, that have moved ahead of the pack by leaps and bounds in the AI arms race.

At its annual Worldwide Developers Conference last month, Apple said it would rely on OpenAI’s ChatGPT to make its virtual assistant Siri smarter and more helpful.

Siri’s optional gateway to ChatGPT will be free to all iPhone users and made available on other Apple products once the option is baked into the next generation of Apple’s operating systems.

ChatGPT subscribers are supposed to be able to easily sync their existing accounts when using the iPhone, and should get more advanced features than free users would.

Apple’s full suite of upcoming features will only work on more recent models of the iPhone, iPad and Mac because the devices require advanced processors.

For instance, consumers will need last year’s iPhone 15 Pro, or the next model coming out later this year, to take full advantage of Apple’s AI package, although all the tools will work on Macs dating back to 2020 once that computer’s next operating system is installed.

The rapid advancement of AI technology has prompted debate among tech observers over possible risks posed to the economy, national security and even the survival of the human race.

Last month, a group of AI whistleblowers claimed that Google and OpenAI were endangering humanity as they sprinted to develop the new technology.

Signed by current and former employees of OpenAI, Google DeepMind and Anthropic, the open letter cautioned that AI companies have strong financial incentives to avoid effective oversight and cited a lack of federal rules on developing advanced AI.

“Companies are racing to develop and deploy ever more powerful artificial intelligence, disregarding the risks and impact of AI,” former OpenAI employee Daniel Kokotajlo, one of the letter’s organizers, said in a statement.

“I decided to leave OpenAI because I lost hope that they would act responsibly, particularly as they pursue artificial general intelligence.”

Government and private sector researchers worry US adversaries could use the models, which mine vast amounts of text and images to summarize information and generate content, to wage aggressive cyber attacks or even create potent biological weapons.

With Post Wires


Why artificial intelligence often struggles with math – The Times of India

In the school year that ended recently, one class of learners stood out as a seeming puzzle. They are hardworking, improving and remarkably articulate. But curiously, these learners - artificially intelligent chatbots - often struggle with math.

Chatbots such as OpenAI's ChatGPT can write poetry, summarize books and answer questions, often with human-level fluency. These systems can do math, based on what they have learned, but the results can vary and be wrong. They are fine-tuned for determining probabilities, not doing rules-based calculations. Likelihood is not accuracy, and language is more flexible, and forgiving, than math. "The AI chatbots have difficulty with math because they were never designed to do it," said Kristian Hammond, a computer science professor and AI researcher at Northwestern University. The world's smartest computer scientists, it seems, have created AI that is more liberal arts major than numbers whiz.

That, on the face of it, is a sharp break with computing's past. Since the early computers appeared in the 1940s, a good summary definition of computing has been "math on steroids." They have been tireless, fast, accurate calculating machines. Yet, all past efforts at AI did hit a wall. Then, over a decade ago, a different approach began to deliver striking gains. The underlying technology, called a neural network, loosely modelled on the human brain, began generating language, based on all the information it has absorbed, by predicting what word or phrase is most likely to come next - much as humans do.

But at times, AI chatbots have stumbled with simple arithmetic and math word problems that require multiple steps to reach a solution, something recently documented by some technology reviewers. The AI's proficiency is getting better, but it remains a shortcoming.
Speaking at a recent symposium, Kristen DiCerbo, chief learning officer of Khan Academy, an education nonprofit that is experimenting with an AI chatbot tutor and teaching assistant, introduced the subject of math accuracy. "It is a problem, as many of you know," DiCerbo told the educators. A few months ago, Khan Academy made a significant change to its AI-powered tutor, called Khanmigo. It sends many numerical problems to a calculator program instead of asking the AI to solve the math. While waiting for the calculator program to finish, students see the words "doing math" on their screens and a Khanmigo icon bobbing its head. "We're actually using tools that are meant to do math," said DiCerbo, who remains optimistic that conversational chatbots will play an important role in education. For more than a year, ChatGPT has used a similar workaround for some math problems. For tasks such as large-number division and multiplication, the chatbot summons help from a calculator program. Math is an "important ongoing area of research," OpenAI said in a statement, and a field where its scientists have made steady progress. Its new version of GPT achieved nearly 64% accuracy on a public database of thousands of problems requiring visual perception and mathematical reasoning, the company said. That is up from 58% for the previous version. The technology's erratic performance in math adds grist to a spirited debate in the AI community about the best way forward in the field. Broadly, there are two camps. On one side are those who believe that the advanced neural networks, known as large language models, that power AI chatbots are almost a singular path to steady progress and eventually to artificial general intelligence, or AGI, a computer that can do anything the human brain can do. That is the dominant view in much of Silicon Valley. But there are skeptics who question if adding more data and computing power to the large language models is enough. 
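Khan Academy and OpenAI have not published how their calculator handoffs work, but the routing idea DiCerbo describes - detect pure arithmetic and hand it to a deterministic calculator instead of the language model - can be sketched in a few lines. Everything below (`safe_eval`, `answer`, the regex gate) is a hypothetical illustration of the pattern, not any product's actual code:

```python
import ast
import operator
import re

# Operators the calculator is willing to evaluate.
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.Pow: operator.pow,
    ast.USub: operator.neg,
}

def safe_eval(expr: str) -> float:
    """Evaluate a pure-arithmetic expression deterministically, without eval()."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.operand))
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval"))

def answer(question: str, llm=lambda q: "(model-generated answer)") -> str:
    """Route pure arithmetic to the calculator; send everything else to the model."""
    expr = question.strip().rstrip("?=").strip()
    if re.fullmatch(r"[\d\s()+\-*/.^]+", expr):
        return str(safe_eval(expr.replace("^", "**")))
    return llm(question)

print(answer("127 * 419"))             # exact: 53213
print(answer("Why is the sky blue?"))  # no arithmetic-only match, goes to the model
```

The point of the pattern is the one the article makes: the calculator path is exact every time, while the model path is only probably right, so anything that looks like rules-based calculation gets pulled out of the model's hands.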
Prominent among them is Yann LeCun, chief AI scientist at Meta. The large language models, LeCun has said, have little grasp of logic and lack common-sense reasoning. What's needed, he insists, is a broader approach, which he calls "world modelling," or systems that can learn how the world works much as humans do. And it may take a decade or so to achieve.

Go here to see the original:

Why artificial intelligence often struggles with math - The Times of India


Enhancing cellular immunotherapies in cancer by engineering selective therapeutic resistance – Nature.com


View post:

Enhancing cellular immunotherapies in cancer by engineering selective therapeutic resistance - Nature.com


Two OSU engineering professors receive NSF EAGER award to study ways to decarbonize heavy industries – Oklahoma State University

Friday, July 26, 2024

Media Contact: Desa James | Communications Coordinator | 405-744-2669 | desa.james@okstate.edu

Dr. Paritosh Ramanan, assistant professor of industrial engineering and management, and Dr. Zheyu Jiang, assistant professor of chemical engineering, were recently awarded a National Science Foundation (NSF) EAGER award.

This two-year NSF grant is part of the agency-wide Clean Energy Technology initiative to develop potentially transformative, convergent, fundamental solutions in clean energy technologies. The EAGER funding mechanism supports exploratory, early-stage work on untested but potentially transformative research ideas or approaches.

In this newly funded project, titled "EAGER: CET: Decentralized Algorithms for Integrating Decarbonized Chemical Process Heating with Renewable-driven, Electric Power Systems," Ramanan and Jiang were awarded $299,050 to study systematic ways to more safely and effectively integrate renewable-driven electric power systems with the chemical and refining industries.

The U.S. manufacturing sector accounts for 20% of the country's primary energy usage and greenhouse gas emissions. Chemical and refining industries in particular are responsible for nearly half of the manufacturing sector's primary energy consumption and GHG emissions, most of which goes to process heating. As the U.S. energy landscape continues to transition toward clean, renewable electricity, chemical process heating is also moving toward electrification.

To support this ongoing trend, electrification of chemical process heating needs to be integrated with clean energy systems and accompanied by decarbonization of the electric grid. This requires a robust, real-time framework to facilitate information sharing between power system stakeholders and chemical plants.

"As industrial decarbonization efforts accelerate, it becomes increasingly important to enable information sharing and collaborative decision-making among stakeholders while keeping sensitive local data private. I am thrilled to collaborate with Dr. Jiang on research that addresses these privacy and computational challenges, particularly in integrating the operations of diverse stakeholders such as chemical plants and renewable-driven power systems," Ramanan said.

In this project, Ramanan and Jiang will jointly develop new computational methodologies for secure, real-time information sharing among stakeholders that preserve the privacy of each party's data. The aim is for chemical plants and power systems to better coordinate their operations and maintenance, achieving holistic decarbonization across all stakeholders.
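The project's actual algorithms are not described in the story, but a standard building block for this kind of privacy-preserving, decentralized coordination is gossip-style consensus: each stakeholder repeatedly exchanges a single aggregate number with its neighbors and never shares its raw operating data. The sketch below is a generic illustration under that assumption; the node names and values are hypothetical, not drawn from the funded work:

```python
def consensus_average(local_values, neighbors, rounds=100):
    """Synchronous gossip averaging over an undirected graph.

    local_values: node -> private scalar (e.g., forecast MW of process-heating load)
    neighbors:    node -> list of nodes it may exchange aggregates with
    """
    x = dict(local_values)  # each node starts from its own private value
    for _ in range(rounds):
        # Every node replaces its value with the mean of itself and its neighbors;
        # only these single aggregate numbers cross organizational boundaries.
        x = {
            n: (x[n] + sum(x[m] for m in neighbors[n])) / (1 + len(neighbors[n]))
            for n in x
        }
    return x

# Hypothetical three-node star: two chemical plants coordinating through a grid operator.
values = {"plant_a": 120.0, "plant_b": 80.0, "grid": 100.0}
graph = {"plant_a": ["grid"], "plant_b": ["grid"], "grid": ["plant_a", "plant_b"]}
print(consensus_average(values, graph))  # all three values converge to 100.0
```

With these simple uniform weights the nodes agree on a common value (here 100.0, by the symmetry of the example); guaranteeing that the consensus equals the exact network average in general requires symmetric weighting schemes such as Metropolis weights, and a real deployment would add the privacy and security machinery the researchers describe.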

"I am excited to work with Dr. Ramanan to tackle this critical issue by bringing together new, cross-cutting expertise and interdisciplinary perspectives from diverse fields, including chemical engineering, power systems, computational science and machine learning," Jiang said.

Story By: Natalie Henderson | Prospective Student Services Coordinator | natalie.henderson@okstate.edu

See the original post here:

Two OSU engineering professors receive NSF EAGER award to study ways to decarbonize heavy industries - Oklahoma State University
