
Dallas reality star accused of stealing money from gay men – Dallas Voice

Cast of A-List Dallas. James Doyle is on the right. (Photo via Logo)

Former A-List Dallas reality star James Doyle is accused of stealing thousands of dollars from unsuspecting gay men, according to K-FOR News in Oklahoma City.

The reality show ran on Logo in 2011 and followed a group of self-proclaimed A-listers. Throughout and after the run of the show, cast members were in trouble with the law and were the subject of attacks, break-ins and damage to their vehicles. And the guest star on the show's finale was right-wing pundit Ann Coulter.

According to the K-FOR story, victims met Doyle on Tinder and became friendly before Doyle told them he needed financial assistance to get out of an abusive relationship.

At one point, Doyle texted, "I have never been this hungry in my life."

The amounts victims claim they sent to Doyle range from $1,200 to $20,000. One victim said he knew of at least eight men who were scammed out of money.

When one victim threatened to take Doyle to court to force him to repay the loan, Doyle threatened to make up sexual assault charges.

For the entire story, visit K-FOR's website.

David Taffet

Follow this link:
Dallas reality star accused of stealing money from gay men - Dallas Voice

Read More..

The week in AI: Apple makes machine learning moves – TechCrunch

Image Credits: Apple

Keeping up with an industry as fast-moving as AI is a tall order. So until an AI can do it for you, here's a handy roundup of the last week's stories in the world of machine learning, along with notable research and experiments we didn't cover on their own.

It could be said that last week, Apple very visibly, and with intention, threw its hat into the ultra-competitive AI race. It's not that the company hadn't signaled its investments in and prioritization of AI previously. But at its WWDC event, Apple made it abundantly clear that AI was behind many of the features in both its forthcoming hardware and software.

For instance, iOS 17, which is set to arrive later this year, can suggest recipes for similar dishes from an iPhone photo using computer vision. AI also powers Journal, a new interactive diary that makes personalized suggestions based on activities across other apps.

iOS 17 will also feature an upgraded autocorrect powered by an AI model that can more accurately predict the next words and phrases that a user might use. Over time, it'll become tailored, learning a user's most frequently used words, including swear words, entertainingly.

AI is central to Apple's Vision Pro augmented reality headset, too, specifically FaceTime on the Vision Pro. Using machine learning, the Vision Pro can create a virtual avatar of the wearer, interpolating out a full range of facial contortions down to the skin tension and muscle work.

It might not be generative AI, which is without a doubt the hottest subcategory of AI today. But Apple's intention, it seems to me, was to mount a comeback of sorts, to show that it's not one to be underestimated after years of floundering machine learning projects, from the underwhelming Siri to the self-driving car in production hell.

Projecting strength isn't just a marketing ploy. Apple's historical underperformance in AI has reportedly led to serious brain drain, with The Information reporting that talented machine learning scientists, including a team that had been working on the type of tech underlying OpenAI's ChatGPT, left Apple for greener pastures.

Showing that it's serious about AI by actually shipping products with AI imbued feels like a necessary move, and a benchmark some of Apple's competitors have, in fact, failed to meet in the recent past. (Here's looking at you, Meta.) By all appearances, Apple made inroads last week, even if it wasn't particularly loud about it.

Here are the other AI headlines of note from the past few days:

If you're curious how AI might affect science and research over the next few years, a team across six national labs authored a report, based on workshops conducted last year, about exactly that. One may be tempted to say that, being based on trends from last year and not this one, in which things have progressed so fast, the report may already be obsolete. But while ChatGPT has made huge waves in tech and consumer awareness, the truth is it's not particularly relevant for serious research. The larger-scale trends are, and they're moving at a different pace. The 200-page report is definitely not a light read, but each section is helpfully divided into digestible pieces.

Elsewhere in the national lab ecosystem, Los Alamos researchers are hard at work on advancing the field of memristors, which combine data storage and processing much like our own neurons do. It's a fundamentally different approach to computation, though one that has yet to bear fruit outside the lab. This new approach appears to move the ball forward, at least.

AI's facility with language analysis is on display in this report on police interactions with people they've pulled over. Natural language processing was used as one of several factors to identify linguistic patterns that predict escalation of stops, especially with Black men. The human and machine learning methods reinforce each other. (Read the paper here.)

DeepBreath is a model trained on recordings of breathing taken from patients in Switzerland and Brazil that its creators at EPFL claim can help identify respiratory conditions early. The plan is to put it out there in a device called the Pneumoscope, under spinout company Onescope. We'll probably follow up with them for more info on how the company is doing.

Another AI health advance comes from Purdue, where researchers have made software that approximates hyperspectral imagery with a smartphone camera, successfully tracking blood hemoglobin and other metrics. It's an interesting technique: using the phone's super-slow-mo mode, it gets a lot of information about every pixel in the image, giving a model enough data to extrapolate from. It could be a great way to get this kind of health information without special hardware.

I wouldn't trust an autopilot to take evasive maneuvers just yet, but MIT is inching the tech closer with research that helps AI avoid obstacles while maintaining a desirable flight path. Any old algorithm can propose wild changes to direction in order to not crash, but doing so while maintaining stability and not pulping anything inside is harder. The team managed to get a simulated jet to perform some Top Gun-like maneuvers autonomously and without losing stability. It's harder than it sounds.

Last this week is Disney Research, which can always be counted on to show off something interesting that also just happens to apply to filmmaking or theme park operations. At CVPR they showed off a powerful and versatile facial landmark detection network that can track facial movements continuously and using more arbitrary reference points. Motion capture is already working without the little capture dots, but this should make it even higher quality and more dignified for the actors.

Read more:
The week in AI: Apple makes machine learning moves - TechCrunch

Read More..

Comet partners with Snowflake to enhance the reproducibility of machine learning datasets – VentureBeat


MLOps platform Comet today announced a strategic partnership with Snowflake aimed at empowering data scientists to build superior machine learning (ML) models at an accelerated pace.

Comet said that the collaboration will integrate Comet's solutions into Snowflake's unified platform, enabling developers to track and version their Snowflake queries and datasets within their Snowflake environment.

Comet says that this integration will enable the tracing of a model's lineage and performance, offering more visibility and comprehension than traditional development processes. It will also shed light on how model performance shifts in response to changes in data.

Overall, the company believes, using Snowflake data in the Comet platform will result in a streamlined and more transparent model development process.


Snowflake's Data Cloud and Comet's ML platform combined will allow customers to build, train, deploy and monitor models significantly faster, according to the companies.

"In addition, this partnership fosters a feedback loop between model development in Comet and data management in Snowflake," Comet CEO Gideon Mendels told VentureBeat.


Mendels said that integrating such a loop can continuously improve models and bridge the gap between experimenting with models and deploying them, fulfilling the key promise of ML: the ability to learn and adapt over time. He said that the clear versioning between datasets and models will enable organizations to better address data changes and their impact on models in production.

Comet's new offering follows its recent release of a suite of tools and integrations designed to accelerate workflows for data scientists working with large language models (LLMs).

When data scientists or developers execute queries to extract datasets from Snowflake for their ML models, Comet will be able to log, version and directly link these queries to the resulting models.

Mendels said this approach offers several advantages, including increased reproducibility, collaboration, auditability and iterative improvement.

"The integration between Comet and Snowflake aims to provide a more robust, transparent and efficient framework for ML development by enabling the tracking and versioning of Snowflake queries and datasets within Snowflake itself," he explained. "By versioning the SQL queries and datasets, data scientists can always trace back to the exact version of the data that was used to train a specific model version. This is crucial for model reproducibility."
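To make that workflow concrete, here is a minimal sketch of how a query-to-model link can be recorded today with the public comet_ml and snowflake-connector-python packages. The query, credentials and parameter names are illustrative placeholders, and the dedicated Comet-Snowflake integration described in the article may expose a different, more direct API.

```python
# Sketch only: record the exact Snowflake query and a dataset fingerprint
# alongside a Comet experiment, so a model version can be traced back to
# the data it was trained on. Names and credentials are placeholders.
import comet_ml
import snowflake.connector

QUERY = "SELECT * FROM TRAINING_FEATURES WHERE SNAPSHOT_DATE = '2023-06-01'"

conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="my_password",
    warehouse="ML_WH", database="ANALYTICS", schema="PUBLIC",
)
df = conn.cursor().execute(QUERY).fetch_pandas_all()

experiment = comet_ml.Experiment(project_name="snowflake-demo")
experiment.log_parameter("snowflake_query", QUERY)  # version the query
experiment.log_dataset_hash(df)                     # fingerprint the extracted data

# ... train a model on df here, then log its metrics ...
experiment.log_metric("val_accuracy", 0.91)
experiment.end()
```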

In ML, training data is just as important as the model itself. Alterations in the data, such as introducing new features, addressing missing values or modifying data distributions, can profoundly affect a model's performance.

Comet says that by tracing a model's lineage, it becomes possible to establish a connection between changes in model performance and specific alterations in the data. This not only aids in debugging and comprehending performance, it also guides data quality and feature engineering.

Mendels said that tracking queries and data over time can create a feedback loop that drives continuous improvements in both the data management and the model development stages.

"Model lineage can facilitate collaboration among a team of data scientists, as it allows anyone to understand a model's history and how it was developed without the need for extensive documentation," said Mendels. "This is particularly useful when team members leave or when new members join the team, allowing for seamless knowledge transfer."

The company claims that customers currently using Comet, such as Uber, Etsy and Shopify, typically report a 70% to 80% improvement in their ML velocity.

"This is due to faster research cycles, the ability to understand model performance and detect issues faster, better collaboration and more," said Mendels. "With the joint solution, this should increase even more, as today there are still challenges in bridging the two systems. Customers save on ingress and consumption costs by keeping the data within Snowflake instead of transferring it over the wire and saving it in other locations."

Mendels said that Comet aims to establish itself as the de facto AI development platform.

"Our view is that businesses will only see real value from AI after they deploy these models based on their own data," he said. "Whether they are training from scratch, fine-tuning an OSS model or using context injection to ChatGPT, Comet's mandate is to make this process seamless and bridge the gap between research and production."


Excerpt from:
Comet partners with Snowflake to enhance the reproducibility of machine learning datasets - VentureBeat

Read More..

BYU to offer machine learning degree this fall – KSL NewsRadio

SALT LAKE CITY – Artificial intelligence may be off limits when it comes to writing college essays, but Brigham Young University is using it in a different way. BYU is getting in on the artificial intelligence craze by offering a new undergraduate degree in machine learning.

BYU Associate Professor of Computer Science David Wingate said the degree aims to help humans staff future jobs in artificial intelligence and sets students up for success.

"Students will learn skills that will translate into both career opportunities at high-tech firms, but also graduate school opportunities and opportunities for advanced research in the field."

Students aren't the only ones to benefit from the program, though. Wingate said the new degree also empowers students to help AI programs like ChatGPT, Siri, Alexa and even Google Ads work smarter.

The degree is a combination of feeding AI programs information and improving the connections between computers to speed up how fast programs can learn.

"ChatGPT was trained on trillions of words of text, on thousands of computers, working together for months at a time just to sort of learn the patterns in the data. They estimate that it probably cost millions of dollars in electricity alone to train ChatGPT."

BYU students can start signing up for the machine learning degree for the 2023 fall semester.

Have a story idea or tip? Send it to the KSL NewsRadio team here.

Original post:
BYU to offer machine learning degree this fall - KSL NewsRadio

Read More..

Machine-Learning Tool Easily Spots ChatGPTs Writing – IEEE Spectrum

Since OpenAI launched its ChatGPT chatbot in November 2022, the tool has produced writing convincing enough to pass as human. Yet while ChatGPT may masquerade as a human, the inaccuracy of its writing can introduce errors that could be devastating if used for serious tasks like academic writing.

A team of researchers from the University of Kansas has developed a tool to weed out AI-generated academic writing from the stuff penned by people, with over 99 percent accuracy. This work was published on 7 June in the journal Cell Reports Physical Science.

Heather Desaire, a professor of chemistry at the University of Kansas and lead author of the new paper, says that while she's been really impressed with many of ChatGPT's results, the limits of its accuracy are what led her to develop a new identification tool. "AI text generators like ChatGPT are not accurate all the time, and I don't think it's going to be very easy to make them produce only accurate information," she says.

"In science, where we are building on the communal knowledge of the planet, I wonder what the impact will be if AI text generation is heavily leveraged in this domain," Desaire says. "Once inaccurate information is in an AI training set, it will be even harder to distinguish fact from fiction."

"After a while, [the ChatGPT-generated papers] had a really monotonous feel to them." –Heather Desaire, University of Kansas

In order to convincingly mimic human-generated writing, chatbots like ChatGPT are trained on reams of real text examples. While the results are often convincing at first glance, existing machine-learning tools can reliably identify telltale signs of AI intervention, such as using less emotional language.

However, existing tools like the widely used deep-learning detector RoBERTa have limited application in academic writing, the researchers write, because academic writing is already more likely to omit emotional language. In previous studies of AI-generated academic abstracts, RoBERTa had roughly 80 percent accuracy.

To bridge this gap, Desaire and her colleagues developed a machine-learning tool that required limited training data. To create the training data, the team collected 64 Perspectives articles, in which scientists provide commentary on new research, from the journal Science, and used those articles to generate 128 ChatGPT samples. These ChatGPT samples included 1,276 paragraphs of text for the researchers' tool to examine.

After optimizing the model, the researchers tested it on two datasets that each contained 30 original, human-written articles and 60 ChatGPT-generated articles. In these tests, the new model was 100 percent accurate when judging full articles, and 97 and 99 percent accurate on the test sets when evaluating only the first paragraph of each article. In comparison, RoBERTa had an accuracy of only 85 and 88 percent on the test sets.

From this analysis, the team identified sentence length and complexity as some of the revealing signs distinguishing AI writing from human writing. They also found that human writers were more likely to name colleagues in their writing, while ChatGPT was more likely to use general terms like "researchers" or "others."
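The paper's exact pipeline isn't reproduced here, but the general approach, a handful of hand-crafted stylistic features feeding an off-the-shelf classifier, might be sketched roughly as follows. The features, example texts and labels are placeholders, not the Kansas team's actual code or feature set.

```python
# Illustrative sketch of a feature-based human-vs-AI text classifier.
# Features approximate the kinds of signals reported above (sentence
# length, word complexity, general terms like "researchers"); they are
# not the published feature set.
import re
import numpy as np
from sklearn.linear_model import LogisticRegression

def stylistic_features(paragraph):
    sentences = [s for s in re.split(r"[.!?]+", paragraph) if s.strip()]
    words = paragraph.split()
    avg_sentence_len = len(words) / max(len(sentences), 1)
    long_word_ratio = sum(len(w) > 7 for w in words) / max(len(words), 1)
    general_terms = sum(w.lower().strip(",.") in {"researchers", "others"}
                        for w in words)
    return [avg_sentence_len, long_word_ratio, general_terms]

# Placeholder labeled paragraphs: 1 = human-written, 0 = ChatGPT-generated.
texts = ["Smith and Chen argue that the catalyst is unstable at scale.",
         "Researchers have found that the results are broadly promising."]
labels = [1, 0]

X = np.array([stylistic_features(t) for t in texts])
clf = LogisticRegression().fit(X, labels)
print(clf.predict(X))
```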

Overall, Desaire says this made for more boring writing. "In general, I would say that the human-written papers were more engaging," she says. "The AI-written papers seemed to break down complexity, for better or for worse. But after a while, they had a really monotonous feel to them."

The researchers hope that this work can be a proof of practice that even off-the-shelf tools can be leveraged to identify AI-generated samples without extensive machine-learning knowledge.

However, these results may be promising only in the short term. Desaire and colleagues note that this scenario is still only a sliver of the type of academic writing that ChatGPT could do. For example, if ChatGPT were asked to write a perspective article in the style of a particular human sample, then it might be more difficult to spot the difference.

Desaire says that she can see a future where AI like ChatGPT is used ethically but says that tools for identification will need to continue to grow with the technology to make this possible.

"I think it could be leveraged safely and effectively in the same way we use spell-check now. A basically complete draft could be edited by AI as a last-step revision for clarity," she says. "If people do this, they need to be absolutely certain that no factual inaccuracies were introduced in this step, and I worry that this fact-check step may not always be done with rigor."


Read the original post:
Machine-Learning Tool Easily Spots ChatGPTs Writing - IEEE Spectrum

Read More..

Machine-learning method used for self-driving cars could improve lives of type-1 diabetes patients – Medical Xpress


Artificial Pancreas System with Reinforcement Learning. Credit: Harry Emerson

The same type of machine learning methods used to pilot self-driving cars and beat top chess players could help type-1 diabetes sufferers keep their blood glucose levels in a safe range.

Scientists at the University of Bristol have shown that reinforcement learning, a type of machine learning in which a computer program learns to make decisions by trying different actions, significantly outperforms commercial blood glucose controllers in terms of safety and effectiveness. By using offline reinforcement learning, where the algorithm learns from patient records, the researchers improve on prior work, showing that good blood glucose control can be achieved by learning from the decisions of the patient rather than by trial and error.
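The article does not spell out which algorithms were evaluated beyond "state-of-the-art offline reinforcement learning," but the core idea of learning from logged decisions rather than live trial and error can be illustrated with a toy batch Q-learning loop like the one below. The states, actions and rewards are simplified placeholders, nothing like a clinical dosing model.

```python
# Toy illustration of offline (batch) RL: fit a Q-table purely from a
# fixed set of pre-collected transitions, with no live interaction.
import numpy as np

N_STATES, N_ACTIONS = 5, 3          # e.g. coarse glucose bands x dose options
GAMMA, ALPHA, EPOCHS = 0.9, 0.1, 200

# Logged transitions (state, action, reward, next_state), standing in for
# decisions and outcomes extracted from historical patient records.
rng = np.random.default_rng(0)
dataset = [(int(rng.integers(N_STATES)), int(rng.integers(N_ACTIONS)),
            float(rng.normal()), int(rng.integers(N_STATES)))
           for _ in range(1000)]

Q = np.zeros((N_STATES, N_ACTIONS))
for _ in range(EPOCHS):
    for s, a, r, s_next in dataset:
        target = r + GAMMA * Q[s_next].max()
        Q[s, a] += ALPHA * (target - Q[s, a])

policy = Q.argmax(axis=1)           # greedy action per state
print(policy)
```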

Type 1 diabetes is one of the most prevalent auto-immune conditions in the UK and is characterized by an insufficiency of the hormone insulin, which is responsible for blood glucose regulation.

Many factors affect a person's blood glucose, and it therefore can be a challenging and burdensome task to select the correct insulin dose for a given scenario. Current artificial pancreas devices provide automated insulin dosing, but are limited by their simplistic decision-making algorithms.

However, a new study published in the Journal of Biomedical Informatics shows that offline reinforcement learning could represent an important milestone of care for people living with the condition. The largest improvement was in children, who experienced an additional 1.5 hours in the target glucose range per day.

Children represent a particularly important group, as they are often unable to manage their diabetes without assistance, and an improvement of this size would result in markedly better long-term health outcomes.

Lead author Harry Emerson from Bristol's Department of Engineering Mathematics, explained, "My research explores whether reinforcement learning could be used to develop safer and more effective insulin dosing strategies.

"These machine learning-driven algorithms have demonstrated superhuman performance in playing chess and piloting self-driving cars, and therefore could feasibly learn to perform highly personalized insulin dosing from pre-collected blood glucose data.

"This particular piece of work focuses specifically on offline reinforcement learning, in which the algorithm learns to act by observing examples of good and bad blood glucose control. Prior reinforcement learning methods in this area predominantly utilize a process of trial and error to identify good actions, which could expose a real-world patient to unsafe insulin doses."

Due to the high risk associated with incorrect insulin dosing, experiments were performed using the FDA-approved UVA/Padova simulator, which creates a suite of virtual patients to test type 1 diabetes control algorithms. State-of-the-art offline reinforcement learning algorithms were evaluated against one of the most widely used artificial pancreas control algorithms. This comparison was conducted across 30 virtual patients (adults, adolescents and children) and considered 7,000 days of data, with performance being evaluated in accordance with current clinical guidelines. The simulator was also extended to consider realistic implementation challenges, such as measurement errors, incorrect patient information and limited quantities of available data.

This work provides a basis for continued reinforcement learning research in glucose control, demonstrating the potential of the approach to improve the health outcomes of people with type 1 diabetes while highlighting the method's shortcomings and areas of necessary future development.

The researchers' ultimate goal is to deploy reinforcement learning in real-world artificial pancreas systems. These devices operate with limited patient oversight and consequently will require significant evidence of safety and effectiveness to achieve regulatory approval.

Emerson added, "This research demonstrates machine learning's potential to learn effective insulin dosing strategies from the pre-collected type 1 diabetes data. The explored method outperforms one of the most widely used commercial artificial pancreas algorithms and demonstrates an ability to leverage a person's habits and schedule to respond more quickly to dangerous events."

More information: Harry Emerson et al, Offline reinforcement learning for safer blood glucose control in people with type 1 diabetes, Journal of Biomedical Informatics (2023). DOI: 10.1016/j.jbi.2023.104376

Follow this link:
Machine-learning method used for self-driving cars could improve lives of type-1 diabetes patients - Medical Xpress

Read More..

The 10 Hottest Data Science And Machine Learning Tools Of 2023 … – CRN

Software News | Rick Whiting | June 16, 2023, 11:49 AM EDT

Data science and machine learning technologies are in big demand as businesses look for ways to analyze big data and automate data-focused processes. Here are 10 startups with leading-edge data science and machine learning technology that have caught our attention (so far) this year.

Tool Time

Data volumes continue to explode, with the global datasphere (the total amount of data created, captured, replicated and consumed) growing at more than 20 percent a year to reach approximately 291 zettabytes in 2027, according to market researcher IDC.

Efforts by businesses and organizations to derive value from all that data are fueling demand for data science tools and technologies for developing data analysis strategies, preparing data for analysis, developing data visualizations and building data models. (Data science is a field of study that uses a scientific approach to extract meaning and insights from data.)

And more of that data is being used to power machine learning projects, which are becoming ubiquitous within enterprise businesses as they build machine learning models and connect them to operational applications and software features such as personalization and natural language interfaces, notes Daniel Treiman, ML engineering lead at ML platform developer Predibase, in a list of ML predictions for 2023.

All this is spurring demand for increasingly sophisticated data science and machine learning tools and platforms. What follows is a look at 10 hot data science and machine learning tools designed to meet those demands.

Some are from industry giants and more established IT vendors while many are from startups focused exclusively on the data science and machine learning sectors. Some of these are new products introduced over the last year while others are new releases of tools and platforms that offer expanded capabilities to meet the latest demands of this rapidly changing space.

Rick Whiting has been with CRN since 2006 and is currently a feature/special projects editor. Whiting manages a number of CRN's signature annual editorial projects, including Channel Chiefs, Partner Program Guide, Big Data 100, Emerging Vendors, Tech Innovators and Products of the Year. He also covers the Big Data beat for CRN. He can be reached at rwhiting@thechannelcompany.com.

Go here to read the rest:
The 10 Hottest Data Science And Machine Learning Tools Of 2023 ... - CRN

Read More..

How machine learning and new AI technologies could change the … – jacksonprogress-argus

The world has never been more online. From work meetings, emails, and texts to shopping, paying bills, and banking, the possibilities are endless. Technological advances save people time and give companies new tools for growth.

But that connectedness comes with a cost. The internet has also never been more rife with criminals looking for vulnerabilities to exploit, hoping to hold companies hostage with ransomware, executing crafty phishing and social engineering attacks, hacking into proprietary information, or capturing private data such as Social Security numbers and addresses.

Does artificial intelligence or machine learning make it easier or harder for companies to guard against such attacks? Can other improvements provide additional defenses against cyberattacks?

The pros and cons of some of the advances have recently burst into the news. Geoffrey Hinton, the "Godfather of AI," recently left Google to warn of the dangers of the very technology he helped develop. He worries that generative artificial intelligence, which can produce text, images, video, and audio, will be used for misinformation and someday even eclipse humans' creativity. Others say those fears are hypothetical.

Drata compiled a list of five technological innovations changing how firms monitor and protect sensitive data essential to their digital operations.

Read the original here:
How machine learning and new AI technologies could change the ... - jacksonprogress-argus

Read More..

Deep Learning: AI, Art History, and the Museum | Magazine – MoMA

MK: What's interesting is that in Anadol's work, older and newer technologies come together in a kind of collage. He customizes a GAN, which is not even the most advanced type of generative AI now, and combines it with different types of rendering software and new mapping algorithms. And he does this in order to play with the degree of efficacy of these different algorithms: to explore different degrees of human intervention and control and different degrees of letting go, letting the machine conjure its own interpretations of the archive and of human memory.

The title of Unsupervised is very specific: it refers to a technical term for a type of machine learning that Anadol and his studio use. Most machine learning is supervised learning, where the AI needs to try to classify the information that is before it. (Even that is still difficult: We have autonomous cars, but they still can't distinguish between the moon and a stop sign.) In supervised learning, humans tag images, for example, or bits of information, in order to train a machine learning model. So, I would go through the data set, and I would tag an image of a pen with the word "pen."

Unsupervised learning, on the other hand, is where the machine does the tagging itself. It's a whole other kind of black box where the machine is actually deciding not only how to tag something, what kinds of properties something should be classified as possessing, but it's also deciding in many ways what is meaningful, what is of value, in terms of information. This is already opening up a kind of agency on the part of the machine that is very different from traditional processes of supervised learning.
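For readers unfamiliar with the terminology, the distinction can be shown in a few lines of scikit-learn: in the supervised case a human supplies the tags, while in the unsupervised case the algorithm has to decide how to group the data on its own. This is only a toy illustration of the vocabulary, not a reflection of Anadol's GAN-based pipeline.

```python
# Minimal supervised vs. unsupervised contrast with scikit-learn.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])

# Supervised: humans provide the labels (the "tags"); the model learns them.
y = np.array([0] * 50 + [1] * 50)
clf = KNeighborsClassifier().fit(X, y)
print("supervised prediction:", clf.predict([[4.8, 5.2]]))

# Unsupervised: no labels at all; the algorithm decides how to group the
# points itself, analogous to the machine "doing the tagging".
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print("unsupervised cluster:", km.predict([[4.8, 5.2]]))
```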

Anadol is using unsupervised learning so that the work can actually generate something new based on its learnings, rather than just classify and process. And then the artist is in his studio working with this model, almost like an electronic musician with lots of different dials in front of him, adjusting what kinds of learning takes place, the rate at which the learning takes place, thousands and thousands of parameters around what kind of forms it could generate.

But at the same time, there's a huge gulf between that stage and getting to the point of creating something that looks the way the works in Unsupervised do. There's so much intervention and, in fact, human collaboration. Because with machine learning, you often might get noise. It doesn't necessarily generate anything that we find meaningful or that we actually could perceive. And so, Anadol is working in concert with this quasi-organic, changing, adaptive system, but also guiding it away from what it might normally optimize, or think, to produce a series of morphologies that is unpredictable but neither simply a chance occurrence, nor fully automated. There's an interplay between probability and indeterminacy.

And then, layered on top of that, in two of the works, is a diagram, a visualization, of the AI's movements in space, moving throughout that complex map or galaxy it has constructed based on everything it's learned, and classified, and clustered according to patterns of affinities. Again, these are affinities that we may never even perceive or think of, building a very complex map, in this case literally 1,024 dimensions. This is not perceivable by human eyes. But what Anadol is doing is creating a map of movement that we can perceive, either as a network of shifting, connected lines, or as four-dimensional fluid dynamics, which looks like a rushing waterfall. It's almost as if you're watching a dance unfold in real time, but the choreographic score is being overlaid on top of the dance.

Read the original:
Deep Learning: AI, Art History, and the Museum | Magazine - MoMA

Read More..

Top Python AI and Machine Learning Libraries – TechRepublic

Learn about some of the best Python libraries for programming Artificial Intelligence (AI), Machine Learning (ML), and Deep Learning (DL).

A lot of software developers are drawn to Python due to its vast collection of open-source libraries. Lately, there have been a lot of libraries cropping up in the realm of Machine Learning (ML) and Artificial Intelligence (AI). These libraries can be readily employed by programmers of all levels for tasks in data science, image and data manipulation, and much more. This programming tutorial will shed some light on why Python is the preferred language for Machine Learning and AI as well as list some of the best ML and AI libraries to choose from.

Lead developer for Numerical Python and Pyfort, Paul Dubois, once stated that Python is "the most powerful language you can still read." Other qualities that have helped propel Python to its current station are its versatility and flexibility, which allow Python to be used alongside other programming languages when needed, including powerhouses like Java and C#. On top of that, Python can operate on nearly all operating systems and platforms on the market.

That might explain Python's enduring popularity among developers, but why are so many of them choosing Python to work with ML and AI libraries? For starters, the sheer number of ML and AI libraries that are available means that developers can count on finding one for whatever problem needs solving. Moreover, being an object-oriented programming (OOP) language, Python lends itself particularly well to efficient data use and manipulation.

Here are a few more reasons why Python is among the top programming languages for Machine Learning, Deep Learning, and Artificial Intelligence:

Now that we have discussed why Python is one of the top programming languages, the rest of this article will present some of the best Python libraries for Machine Learning and AI.

SEE: How to become a Machine Learning Engineer cheat sheet

Formerly known as Numeric, NumPy was the brainchild of Jim Hugunin, along with contributions from several other developers. In 2005, NumPy was officially born when Travis Oliphant incorporated features of the competing Numarray into Numeric, with extensive modifications. Today, NumPy is completely open-source and has many contributors. It is also widely regarded as the best Python library for Machine Learning and AI.

NumPy is mostly utilized by data scientists to perform a variety of mathematical operations on large, multi-dimensional arrays and matrices. NumPy arrays require far less storage than standard Python lists, and they are faster and more convenient to use, making NumPy a great option for increasing the performance of Machine Learning models without too much work. Another attractive feature is that NumPy has tools for integrating C, C++, and Fortran code.
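As a quick illustration of the vectorized, multi-dimensional array work described above, the following snippet performs element-wise math and matrix multiplication without any explicit Python loops.

```python
# Element-wise math and matrix operations on a NumPy array.
import numpy as np

matrix = np.arange(12, dtype=np.float64).reshape(3, 4)   # 3x4 array
scaled = matrix * 2.5                                     # element-wise multiply
col_means = scaled.mean(axis=0)                           # per-column statistics

print(scaled)
print("column means:", col_means)
print("matrix product:\n", matrix @ matrix.T)             # 3x3 result
```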

Some of NumPy's other features that make it popular amongst the scientific community include:

NumPy (see above) is so popular that several libraries are based on it, including SciPy. Like its inspiration, SciPy is also a free and open-source library. SciPy is geared towards large data sets, as well as performing scientific and technical computing against those data sets. SciPy also comes with embedded modules for array optimization and linear algebra, just like NumPy. Playing a key role in scientific analysis and engineering, SciPy has grown to become one of the foundational Python libraries.

The allure of SciPy is that it takes all of NumPy's functions and turns them into user-friendly, scientific tools. As such, it is often used for image manipulation and provides basic processing features for high-level, non-scientific mathematical functions.
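A small example of that layering: SciPy reuses NumPy arrays and functions while adding higher-level scientific routines such as optimization and numerical integration.

```python
# SciPy building on NumPy: minimize a simple function and integrate another.
import numpy as np
from scipy import optimize, integrate

# Find the minimum of f(x) = (x - 3)^2 + 1.
result = optimize.minimize_scalar(lambda x: (x - 3) ** 2 + 1)
print("minimum at x =", result.x)

# Numerically integrate sin(x) from 0 to pi (exact answer is 2).
area, _ = integrate.quad(np.sin, 0, np.pi)
print("integral of sin on [0, pi] =", area)
```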

The main features of SciPy include:

TensorFlow is a free and open source library that is available for Python, JavaScript, C++, and Java. This flexibility lends itself to a wide range of applications in many different sectors. Developed by the Google Brain team for internal Google use in research and production, the initial version was released under the Apache License 2.0 in 2015. Google released the updated version of TensorFlow, named TensorFlow 2.0, in September 2019.

Although TensorFlow can be used for a range of tasks, it's particularly adept at the training and inference of deep neural networks. Using TensorFlow, developers can create and train ML models on not just computers but also mobile devices and servers by using TensorFlow Lite and TensorFlow Serving. These alternatives offer the same benefits but for mobile platforms and high-performance servers.
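As a minimal sketch of that deep-learning workflow, the Keras API bundled with TensorFlow lets you define, compile and train a small neural network in a few lines. The data here is synthetic, purely to keep the example self-contained.

```python
# Define and train a tiny neural network with TensorFlow's Keras API.
import numpy as np
import tensorflow as tf

X = np.random.rand(256, 8).astype("float32")
y = (X.sum(axis=1) > 4.0).astype("float32")      # toy binary target

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(8,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
print(model.evaluate(X, y, verbose=0))            # [loss, accuracy]
```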

Some of the areas in ML and DL where TensorFlow excels are:

This tutorial shed some light on why Python is the preferred language for Machine Learning and AI and listed some of the best ML and AI libraries to choose from, including TensorFlow, SciPy, and NumPy. We will be adding to this list in the coming weeks, so be sure to check back often.

SEE: Learn how to build AI powered software

Go here to read the rest:
Top Python AI and Machine Learning Libraries - TechRepublic

Read More..