
INTRODUCING | Binance Launches a One-Stop Solution for Inscribing and Trading BRC-20 Inscription Tokens – bitcoinke.io

Binance, the world's largest exchange, has introduced an in-app Inscriptions Marketplace that supports Bitcoin BRC-20 tokens, including ORDI ($ORDI).

As per an announcement by the crypto exchange, the Binance Inscriptions Marketplace serves as a one-stop solution integrated into the Binance Web3 Wallet. This feature empowers users to inscribe and trade various inscriptions, including BRC-20 tokens and Ethereum Virtual Machine (EVM) tokens.

The BRC-20 is an experimental token standard designed for fungible tokens on the Bitcoin blockchain.

In contrast to Ethereum's ERC-20 tokens, which leverage the blockchain's smart contracts, BRC-20 tokens utilize Ordinals inscriptions. This protocol allows for the inscription of data on individual Satoshis (Sats), which are the smallest denominations of Bitcoin.

The introduction of BRC-20 tokens has facilitated the development of meme coins built on the Bitcoin blockchain. Tokens like $ORDI, $SATS, and $PIZA have contributed to the collective market capitalization of BRC-20 tokens surpassing $2.8 billion, according to data from CoinGecko.

Ordinals and BRC-20 tokens have sparked controversy among the Bitcoin community. Bitcoin Core developer Luke Dashjr has contended that inscriptions exploit a vulnerability in Bitcoin Core, increasing blockchain spam and driving up transaction fees.

Despite facing criticism, BRC-20 tokens are displaying resilience and continued adoption. Binance has joined wallets like Phantom in supporting this new token standard. Notably, ORDI, the most valuable BRC-20 token in terms of market capitalization, holds a position within the top 100 cryptocurrencies overall, according to data from CoinGecko.

The Inscriptions Marketplace is designed with accessibility and convenience in mind. It caters to all, from BRC-20 enthusiasts to those new to Web3, providing a secure and enjoyable experience. The introduction of the Inscriptions Marketplace is a welcome development for the Binance Web3 Wallet, enhancing users' journey through the decentralized web with its array of features.

Binance

Follow us on Twitter for the latest posts and updates

Join and interact with our Telegram community

________________________________________


See the original post:

INTRODUCING | Binance Launches a One-Stop Solution for Inscribing and Trading BRC-20 Inscription Tokens - bitcoinke.io

Read More..

Ethereum co-founder Vitalik Buterin, net worth S$739 million, spotted in S’pore at MRT station – Mothership.sg

What are the odds of bumping into a multi-millionaire in Singapore?

And what are the odds of bumping into a multi-millionaire while waiting to take the MRT train in Singapore?

That was exactly what happened to one commuter, who spotted Russian-Canadian computer programmer and Ethereum co-founder Vitalik Buterin at King Albert Park MRT station.

The 30-year-old was born in Russia and emigrated to Canada with his family when he was six. He began working on the cryptocurrency full-time before he turned 20 and has been credited with involvement in the project from its earliest days.

His net worth is currently at least US$550 million (S$739 million), held mainly in ETH, the token he personally co-developed.

At cryptocurrencies' peak, Buterin became the youngest crypto billionaire at the age of 27.

He was also spotted on the streets in Singapore, as per another two photos posted on Facebook on Feb. 21.

Buterin's unassuming character has won him high praise online.

One tweet, in response to Buterin's public transportation-using ways, read: "A developed country is not a place where the poor have cars. It's where the rich use public transportation."

Buterin's philanthropy has been well-documented.

He is also known not to hoard other cryptocurrencies.

In May 2021, he donated US$1.14 billion worth of cryptocurrency to the India Covid-Crypto Relief Fund.

The donation consisted of 500 ETH and over 50 trillion SHIB (Shiba Inu), a meme coin, that was gifted to him.

This donation represented 5 per cent of the meme coin in circulation, and it triggered a roughly 50 per cent crash in the coin's price at the time as it caused panic among holders.

Around the same time, he also donated US$336 million worth of Dogelon Mars ($ELON), which had previously been gifted to him as well, to the Methuselah Foundation, which focuses on extending human lifespan.

Buterin's donation of this meme coin caused a 70 per cent drop in its value.

Top photo via Asan Kayo & @dAAAb X

Originally posted here:

Ethereum co-founder Vitalik Buterin, net worth S$739 million, spotted in S'pore at MRT station - Mothership.sg

Read More..

Vitalik Buterin and Sandeep Nailwal headline decentralized agi summit @ Ethdenver tackling threats of centralized AI – Grit Daily

Denver, USA, February 23rd, 2024, Chainwire

The Decentralized AGI Summit, organized by Sentient and Symbolic Capital, will bring together top thought leaders in Decentralized AI like Vitalik Buterin, Sandeep Nailwal, Illia Polosukhin, and Sreeram Kannan.

As the development of artificial general intelligence (AGI) systems accelerates, there are growing concerns that centralized AI controlled by a small number of actors poses a major threat to humanity. The inaugural Decentralized AGI Summit will bring together top experts in AI and blockchain like Vitalik Buterin, Sandeep Nailwal, Illia Polosukhin, Sreeram Kannan, and more, to explore how decentralized, multi-stakeholder governance models enabled by blockchain technology can help make the development of AGI safer, more transparent and aligned with the greater good.

"The rapid acceleration of centralized AI and its integration into everyday life has led humanity to a crossroads between two future worlds," says Sandeep Nailwal. "On the one hand, we have the choice of a Closed World. This world is controlled by a few closed-source models run by massive mega corporations. On the other hand, we have the choice of an Open World. In this world, models are default open-source, inference is verifiable, and value flows back to the stakeholders. The Open World is the world we want to live in, but it is only possible by leveraging blockchain to make AI more transparent and just."

The Decentralized AGI Summit will take place on Monday, February 26th from 3-9pm MST. It is free and open to the public to attend at: https://decentralizedagi.org/.

"We are excited to help facilitate this important discussion around the development of safe and ethical AGI systems that leverage decentralization and multi-stakeholder governance," said Kenzi Wang, Co-Founder and General Partner at Symbolic Capital. "Bringing luminaries across both the AI and web3 domains together will help push forward thinking on this critical technological frontier."

Featured keynote speakers include:

Vitalik Buterin, Co-Founder of Ethereum Foundation

Sandeep Nailwal, Co-Founder of Polygon Labs

Illia Polosukhin, Co-Founder of Near Foundation

Sreeram Kannan, Founder of Eigenlayer

Topics will span technical AI safety research, governance models for AGI systems, ethical considerations, and emerging use cases at the intersection of AI and blockchain. The summit aims to foster collaboration across academic institutions, industry leaders and the decentralized AI community.

For more details and to register, visit https://decentralizedagi.org/.

About Sentient

Sentient is building a decentralized AGI platform. Sentient's team comprises leading web3 founders, builders, researchers, and academics who are committed to creating trustless and open artificial intelligence models.

Learn more about Sentient here: https://sentient.foundation/

About Symbolic Capital

Symbolic Capital is a people-driven investment firm supporting the best web3 projects globally. Our team has founded and led some of the most important blockchain companies in the world, and we leverage this background to provide unparalleled support to the companies in our portfolio.

Learn more about Symbolic Capital here: https://www.symbolic.capital/

Sam Lehman [email protected]

See the original post:

Vitalik Buterin and Sandeep Nailwal headline decentralized agi summit @ Ethdenver tackling threats of centralized AI - Grit Daily

Read More..

Vitalik Buterin Comes to Taiwan as Keynote Speaker at ETHTaipei 2024 – U.Today

ETHTaipei 2024, the annual Ethereum developer conference and hackathon, returns from March 21st to 24th. This four-day event invites developers worldwide to explore the latest in Ethereum technology through insightful talks, hands-on workshops, and a competitive hackathon, all featuring renowned experts.

Vitalik Buterin, the co-founder of Ethereum, will be at ETHTaipei 2024 in person. In his highly anticipated speech, Vitalik will delve into the latest advancements and future directions of the Ethereum ecosystem. Following the Dencun upgrade, with the network formally running EIP-4844, Vitalik will also join a timely panel with leaders of Layer 2s to discuss the progress of scaling Ethereum.

The four-day event will feature international conferences and a hackathon.

The conference agenda features in-depth sessions on key themes, focusing on ZK, DeFi, and Security. Learn from more than 40 acclaimed speakers representing the Ethereum Foundation, Geth, Perpetual Protocol, Gnosis, Dedaub, and more as they cover Ethereum upgrades, MEV, Layer 2, Autonomous Worlds, and other trending topics. Interactive panel discussions will facilitate knowledge sharing and spark thought-provoking debates among industry leaders.

The hackathon will kick off on the evening of March 22nd. A 24-hour Hacker House will be provided on-site. Developers will need to complete their work within 40 hours based on the selected topics to compete for prizes. The hackathon is free to register. During the hacking period, various workshops will let developers not only learn from each other but also communicate directly with members from the top projects in the industry. This year, job matching opportunities will be open, giving developers the chance to not only earn prizes but also receive job offers.

ETHTaipei proudly collaborates with renowned companies like Nuvo, XY Finance, Consensys, MintClub, Ora, BTSE, Lita, Zircuit, Harvest Finance, Dyson Finance, Term Structure and more. These industry leaders will showcase their latest blockchain products and engage with developers through demos, workshops, and discussions, fostering innovation and propelling the blockchain ecosystem forward.

For all developers seeking to expand their knowledge of Ethereum's exciting new technologies and trends, ETHTaipei 2024 presents a golden opportunity to learn, collaborate, and network with top minds in the field.

ETHTaipei 2024 Event Information

Date: March 21-24, 2024
Location: POPUP Taipei
Tickets and event information: https://ethtaipei.org/

Media Contact

ETHTaipei Team: ethtaipei23@gmail.com
Hana Chang: +886-975-856-705

Link:

Vitalik Buterin Comes to Taiwan as Keynote Speaker at ETHTaipei 2024 - U.Today

Read More..

Generative AI Defined: How It Works, Benefits and Dangers – TechRepublic

What is generative AI in simple terms?

Generative AI is a type of artificial intelligence technology that broadly describes machine learning systems capable of generating text, images, code or other types of content, often in response to a prompt entered by a user.

Generative AI models are increasingly being incorporated into online tools and chatbots that allow users to type questions or instructions into an input field, upon which the AI model will generate a human-like response.

DOWNLOAD: This generative AI guide from TechRepublic Premium.

Generative AI uses a computing process known as deep learning to analyze patterns in large sets of data and then replicates this to create new data that appears human-generated. It does this by employing neural networks, a type of machine learning process that is loosely inspired by the way the human brain processes, interprets and learns from information over time.

To give an example, if you were to feed lots of fiction writing into a generative AI model, it would eventually gain the ability to craft stories or story elements based on the literature it's been trained on. This is because the machine learning algorithms that power generative AI models learn from the information they're fed; in the case of fiction, this would include elements like plot structure, characters, themes and other narrative devices.

Generative AI models get more sophisticated over time: the more data a model is trained on and generates, the more convincing and human-like its outputs become.

The popularity of generative AI has exploded in recent years, largely thanks to the arrival of OpenAI's ChatGPT and DALL-E models, which put accessible AI tools into the hands of consumers.

Since then, big tech companies including Google, Microsoft, Amazon and Meta have launched their own generative AI tools to capitalize on the technology's rapid uptake.

Various generative AI tools now exist, although text and image generation models are arguably the most well-known. Generative AI models typically rely on a user feeding a prompt into the engine that guides it towards producing some sort of desired output, be it text, an image, a video or a piece of music, though this isn't always the case.

Examples of generative AI models include:

Various types of generative AI models exist, each designed for specific tasks and purposes. These can broadly be categorized into the following types.

Transformer-based models are trained on large sets of data to understand the relationships between sequential information like words and sentences. Underpinned by deep learning, transformer-based models tend to be adept at natural language processing and understanding the structure and context of language, making them well suited for text-generation tasks. ChatGPT-3 and Google Gemini are examples of transformer-based generative AI models.
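To make transformer-based text generation concrete, here is a minimal sketch using the Hugging Face transformers library; the model name ("gpt2"), the prompt and the token limit are illustrative assumptions rather than details from the article.

```python
from transformers import pipeline

# Build a text-generation pipeline backed by a small transformer language model.
generator = pipeline("text-generation", model="gpt2")

# The model continues the prompt token by token, using the context it has learned.
result = generator("Once upon a time in a quiet village,", max_new_tokens=40)
print(result[0]["generated_text"])
```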

Generative adversarial networks are made up of two neural networks known as a generator and a discriminator, which essentially work against each other to create authentic-looking data. As the name implies, the generator's role is to generate convincing output, such as an image based on a prompt, while the discriminator works to evaluate the authenticity of said image. Over time, each component gets better at its respective role, resulting in more convincing outputs. DALL-E and Midjourney are examples of GAN-based generative AI models.
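The generator/discriminator pairing can be sketched in a few lines of PyTorch; the layer sizes, latent dimension and flattened 28x28 image shape below are illustrative assumptions, not details from the article.

```python
import torch
import torch.nn as nn

# Generator: maps a random noise vector to a flattened 28x28 "image".
generator = nn.Sequential(
    nn.Linear(64, 128), nn.ReLU(),
    nn.Linear(128, 784), nn.Tanh(),
)

# Discriminator: scores how authentic a flattened image looks (1 = real, 0 = fake).
discriminator = nn.Sequential(
    nn.Linear(784, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid(),
)

noise = torch.randn(16, 64)                   # a batch of random latent vectors
fake_images = generator(noise)                # the generator tries to fool the discriminator
realism_scores = discriminator(fake_images)   # the discriminator judges authenticity
```

During training, the two networks would be updated in alternation so that each improves at its role, which is the adversarial dynamic described above.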

Variational autoencoders leverage two networks to interpret and generate data: in this case, an encoder and a decoder. The encoder takes the input data and compresses it into a simplified format. The decoder then takes this compressed information and reconstructs it into something new that resembles the original data but isn't entirely the same.

One example might be teaching a computer program to generate human faces using photos as training data. Over time, the program learns how to simplify the photos of people's faces into a few important characteristics, such as the size and shape of the eyes, nose, mouth and ears, and then use these to create new faces.

This type of VAE might be used to, say, increase the diversity and accuracy of facial recognition systems. By using VAEs to generate new faces, facial recognition systems can be trained to recognize more diverse facial features, including those that are less common.
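A minimal encoder/decoder sketch along these lines, in PyTorch; the 784-dimensional input (a flattened image) and the 16-dimensional latent code are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TinyVAE(nn.Module):
    """Compress a flattened image to a small latent code, then reconstruct it."""
    def __init__(self, latent_dim=16):
        super().__init__()
        self.encoder = nn.Linear(784, 2 * latent_dim)  # outputs mean and log-variance
        self.decoder = nn.Linear(latent_dim, 784)

    def forward(self, x):
        mu, logvar = self.encoder(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # sample a latent code
        return torch.sigmoid(self.decoder(z)), mu, logvar

x = torch.rand(8, 784)                      # stand-in for eight flattened face photos
reconstruction, mu, logvar = TinyVAE()(x)   # outputs resemble, but do not copy, the input
```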

Multimodal models can understand and process multiple types of data simultaneously, such as text, images and audio, allowing them to create more sophisticated outputs. An example might be an AI model capable of generating an image based on a text prompt, as well as a text description of an image prompt. DALL-E 3 and OpenAIs GPT-4 are examples of multimodal models.

ChatGPT is an AI chatbot developed by OpenAI. It's a large language model that uses transformer architecture (specifically, the generative pretrained transformer, hence GPT) to understand and generate human-like text.

You can learn everything you need to know about ChatGPT in this TechRepublic cheat sheet.

Google Gemini (previously Bard) is another example of an LLM based on transformer architecture. Similar to ChatGPT, Gemini is a generative AI chatbot that generates responses to user prompts.

Google launched Bard in the U.S. in March 2023 in response to OpenAI's ChatGPT and Microsoft's Copilot AI tool. It was launched in Europe and Brazil later that year.

Learn more about Gemini by reading TechRepublic's comprehensive Google Gemini cheat sheet.

SEE: Google Gemini vs. ChatGPT: Is Gemini Better Than ChatGPT? (TechRepublic)

For businesses, efficiency is arguably the most compelling benefit of generative AI because it can help automate specific tasks and focus employees' time, energy and resources on more important strategic objectives. This can result in lower labor costs, greater operational efficiency and insights into how well certain business processes are or are not performing.

For professionals and content creators, generative AI tools can help with idea creation, content planning and scheduling, search engine optimization, marketing, audience engagement, research and editing, and potentially more. Again, the key proposed advantage is efficiency, because generative AI tools can help users reduce the time they spend on certain tasks and invest their energy elsewhere. That said, manual oversight and scrutiny of generative AI models remains highly important; we explain why later in this article.

McKinsey estimates that, by 2030, activities that currently account for around 30% of U.S. work hours could be automated, prompted by the acceleration of generative AI.

SEE: Indeed's 10 Highest-Paid Tech Skills: Generative AI Tops the List

Generative AI has found a foothold in a number of industry sectors and is now popular in both commercial and consumer markets. The use of generative AI varies from industry to industry and is more established in some than in others. Current and proposed use cases include the following:

In terms of role-specific use cases of generative AI, some examples include:

A major concern around the use of generative AI tools, particularly those accessible to the public, is their potential for spreading misinformation and harmful content. The impact of doing so can be wide-ranging and severe, from perpetuating stereotypes, hate speech and harmful ideologies to damaging personal and professional reputations.

SEE: Gartner analysts take on 5 ways generative AI will impact culture & society

The risk of legal and financial repercussions from the misuse of generative AI is also very real; indeed, it has been suggested that generative AI could put national security at risk if used improperly or irresponsibly.

These risks haven't escaped policymakers. On Feb. 13, 2024, the European Council approved the AI Act, a first-of-its-kind piece of legislation designed to regulate the use of AI in Europe. The legislation takes a risk-based approach to regulating AI, with some AI systems banned outright.

Security agencies have made moves to ensure AI systems are built with safety and security in mind. In November 2023, 16 agencies including the U.K.'s National Cyber Security Centre and the U.S. Cybersecurity and Infrastructure Security Agency released the Guidelines for Secure AI System Development, which promote security as a fundamental aspect of AI development and deployment.

Generative AI has prompted workforce concerns, most notably that the automation of tasks could lead to job losses. Research from McKinsey suggests that, by 2030, around 12 million people may need to switch jobs, with office support, customer service and food service roles most at risk. The consulting firm predicts that clerks will see a decrease of 1.6 million jobs, in addition to losses of 830,000 for retail salespersons, 710,000 for administrative assistants and 630,000 for cashiers.

SEE: OpenAI, Google and More Agree to White House List of Eight AI Safety Assurances

Generative AI and general AI represent different sides of the same coin; both relate to the field of artificial intelligence, but the former is a subtype of the latter.

Generative AI uses various machine learning techniques, such as GANs, VAEs or LLMs, to generate new content from patterns learned from training data.

General AI, also known as artificial general intelligence, broadly refers to the concept of computer systems and robotics that possess human-like intelligence and autonomy. This is still the stuff of science fiction: think Disney Pixar's WALL-E, Sonny from 2004's I, Robot or HAL 9000, the malevolent AI from 2001: A Space Odyssey. Most current AI systems are examples of narrow AI, in that they're designed for very specific tasks.

To learn more about what artificial intelligence is and isnt, read our comprehensive AI cheat sheet.

Generative AI is a subfield of artificial intelligence; broadly, AI refers to the concept of computers capable of performing tasks that would otherwise require human intelligence, such as decision making and NLP. Generative AI models use machine learning techniques to process and generate data.

Machine learning is the foundational component of AI and refers to the application of computer algorithms to data for the purposes of teaching a computer to perform a specific task. Machine learning is the process that enables AI systems to make informed decisions or predictions based on the patterns they have learned.

DOWNLOAD: TechRepublic Premium's prompt engineer hiring kit

What is the difference between generative AI and discriminative AI?

Whereas generative AI is used for generating new content by learning from existing data, discriminative AI specializes in classifying or categorizing data into predefined groups or classes.

Discriminative AI works by learning how to tell different types of data apart. It's used for tasks where data needs to be sorted into groups; for example, figuring out if an email is spam, recognizing what's in a picture or diagnosing diseases from medical images. It looks at data it already knows to classify new data correctly.

So, while generative AI is designed to create original content or data, discriminative AI is used for analyzing and sorting it, making each useful for different applications.

Regenerative AI, while less commonly discussed, refers to AI systems that can fix themselves or improve over time without human help. The concept of regenerative AI is centered around building AI systems that can last longer and work more efficiently, potentially even helping the environment by making smarter decisions that result in less waste.

In this way, generative AI and regenerative AI serve different roles: Generative AI for creativity and originality, and regenerative AI for durability and sustainability within AI systems.

It certainly looks as though generative AI will play a huge role in the future. As more businesses embrace digitization and automation, generative AI looks set to play a central role in industries of all types, with many organizations already establishing guidelines for the acceptable use of AI in the workplace. The capabilities of gen AI have already proven valuable in areas such as content creation, software development, medicine, productivity, business transformation and much more. As the technology continues to evolve, gen AI's applications and use cases will only continue to grow.

SEE: Deloitte's 2024 Tech Predictions: Gen AI Will Continue to Shape Chips Market

That said, the impact of generative AI on businesses, individuals and society as a whole is contingent on properly addressing and mitigating its risks. Key to this is ensuring AI is used ethically by reducing biases, enhancing transparency and accountability and upholding proper data governance.

None of this will be straightforward. Keeping laws up to date with fast-moving tech is tough but necessary, and finding the right mix of automation and human involvement will be key to democratizing the benefits of generative AI. Recent legislation such as President Biden's Executive Order on AI, Europe's AI Act and the U.K.'s Artificial Intelligence Bill suggests that governments around the world understand the importance of getting on top of these issues quickly.

See more here:

Generative AI Defined: How It Works, Benefits and Dangers - TechRepublic

Read More..

AI and You: OpenAI’s Sora Previews Text-to-Video Future, First Ivy League AI Degree – CNET

AI developments are happening pretty fast. If you don't stop and look around once in a while, you could miss them.

Fortunately, I'm looking around for you and what I saw this week is that competition between OpenAI, maker of ChatGPT and Dall-E, and Google continues to heat up in a way that's worth paying attention to.

A week after updating its Bard chatbot and changing the name to Gemini, Google's DeepMind AI subsidiary previewed the next version of its generative AI chatbot. DeepMind told CNET's Lisa Lacy that Gemini 1.5 will be rolled out "slowly" to regular people who sign up for a wait list and will be available now only to developers and enterprise customers.

Gemini 1.5 Pro, Lacy reports, is "as capable as" the Gemini 1.0 Ultra model, which Google announced on Feb. 8. The 1.5 Pro model has a win rate -- a measurement of how many benchmarks it can outperform -- of 87% compared with the 1.0 Pro and 55% against the 1.0 Ultra. So the 1.5 Pro is essentially an upgraded version of the best available model now.

Gemini 1.5 Pro can ingest video, images, audio and text to answer questions, added Lacy. Oriol Vinyals, vice president of research at Google DeepMind and co-lead of Gemini, described 1.5 as a "research release" and said the model is "very efficient" thanks to a unique architecture that can answer questions by zeroing in on expert sources in that particular subject rather than seeking the answer from all possible sources.

Meanwhile, OpenAI announced a new text-to-video model called Sora that's capturing a lot of attention because of the photorealistic videos it's able to generate. Sora can "create videos of up to 60 seconds featuring highly detailed scenes, complex camera motion, and multiple characters with vibrant emotions." Following up on a promise it made, along with Google and Meta last week, to watermark AI-generated images and video, OpenAI says it's also creating tools to detect videos created with Sora so they can be identified as being AI generated.

Google and Meta have also announced their own gen AI text-to-video creators.

Sora, which means "sky" in Japanese, is also being called experimental, with OpenAI limiting access for now to so-called "red teamers," security experts and researchers who will assess the tool's potential harms or risks. That follows through on promises made as part of President Joe Biden's AI executive order last year, asking developers to submit the results of safety checks on their gen AI chatbots before releasing them publicly. OpenAI said it's also looking to get feedback on Sora from some visual artists, designers and filmmakers.

How do the photorealistic videos look? Pretty realistic. I agree with The New York Times, which described the short demo videos -- "of wooly mammoths trotting through a snowy meadow, a monster gazing at a melting candle and a Tokyo street scene seemingly shot by a camera swooping across the city" -- as "eye popping."

MIT Technology Review, which also got a preview of Sora, said the "tech has pushed the envelope of what's possible with text-to-video generation." Meanwhile, The Washington Post noted Sora could exacerbate an already growing problem with video deepfakes, which have been used to "deceive voters" and scam consumers.

One X commentator summarized it this way: "Oh boy here we go what is real anymore." And OpenAI CEO Sam Altman called the news about its video generation model a "remarkable moment."

You can see the four examples of what Sora can produce on OpenAI's intro site, which notes that the tool is "able to generate complex scenes with multiple characters, specific types of motion, and accurate details of the subject and background. The model understands not only what the user has asked for in the prompt, but also how those things exist in the physical world. The model has a deep understanding of language, enabling it to accurately interpret prompts and generate compelling characters that express vibrant emotions."

But Sora has its weaknesses, which is why OpenAI hasn't yet said whether it will actually be incorporated into its chatbots. Sora "may struggle with accurately simulating the physics of a complex scene and may not understand specific instances of cause and effect. For example, a person might take a bite out of a cookie, but afterward, the cookie may not have a bite mark. The model may also confuse spatial details of a prompt, for example, mixing up left and right."

All of this is to remind us that tech is a tool -- and that it's up to us humans to decide how, when, where and why to use that technology. In case you didn't see it, the trailer for the new Minions movie (Despicable Me 4: Minion Intelligence) makes this point cleverly, with its sendup of gen AI deepfakes and Jon Hamm's voiceover of how "artificial intelligence is changing how we see the world... transforming the way we do business."

"With artificial intelligence," Hamm adds over the minions' laughter, "the future is in good hands."

Here are the other doings in AI worth your attention.

Twenty tech companies, including Adobe, Amazon, Anthropic, ElevenLabs, Google, IBM, Meta, Microsoft, OpenAI, Snap, TikTok and X, agreed at a security conference in Munich that they will voluntarily adopt "reasonable precautions" to guard against AI tools being used to mislead or deceive voters ahead of elections.

"The intentional and undisclosed generation and distribution of Deceptive AI Election content can deceive the public in ways that jeopardize the integrity of electoral processes," the text of the accord says, according to NPR. "We affirm that the protection of electoral integrity and public trust is a shared responsibility and a common good that transcends partisan interests and national borders."

But the agreement is "largely symbolic," the Associated Press reported, noting that "reasonable precautions" is a little vague.

"The companies aren't committing to ban or remove deepfakes," the AP said. "Instead, the accord outlines methods they will use to try to detect and label deceptive AI content when it is created or distributed on their platforms. It notes the companies will share best practices with each other and provide 'swift and proportionate responses' when that content starts to spread."

AI has already been used to try to trick voters in the US and abroad. Days before the New Hampshire presidential primary, fraudsters sent an AI robocall that mimicked President Biden's voice, asking voters not to vote in the primary. That prompted the Federal Communications Commission this month to make AI-generated robocalls illegal. The AP said that "Just days before Slovakia's elections in November, AI-generated audio recordings impersonated a candidate discussing plans to raise beer prices and rig the election. Fact-checkers scrambled to identify them as false as they spread across social media."

"Everybody recognizes that no one tech company, no one government, no one civil society organization is able to deal with the advent of this technology and its possible nefarious use on their own," Nick Clegg, president of global affairs for Meta, told the Associated Press in an interview before the summit.

Over 4 billion people are set to vote in key elections this year in more than 40 countries, including the US, The Hill reported.

If you're concerned about how deepfakes may be used to scam you or your family members -- someone calls your grandfather and asks them for money by pretending to be you -- Bloomberg reporter Rachel Metz has a good idea. She suggests it may be time for us all to create a "family password" or safe word or phrase to share with our family or personal network that we can ask for to make sure we're talking to who we think we're talking to.

"Extortion has never been easier," Metz reports. "The kind of fakery that used to take time, money and technical know-how can now be accomplished quickly and cheaply by nearly anyone."

That's where family passwords come in, since they're "simple and free," Metz said. "Pick a word that you and your family (or another trusted group) can easily remember. Then, if one of those people reaches out in a way that seems a bit odd -- say, they're suddenly asking you to deliver 5,000 gold bars to a P.O. Box in Alaska -- first ask them what the password is."

How do you pick a good password? She offers a few suggestions, including using a word you don't say frequently and that's not likely to come up in casual conversations. Also, "avoid making the password the name of a pet, as those are easily guessable."

Hiring experts have told me it's going to take years to build an AI-educated workforce, considering that gen AI tools like ChatGPT weren't released until late 2022. So it makes sense that learning platforms like Coursera, Udemy, Udacity, Khan Academy and many universities are offering online courses and certificates to upskill today's workers. Now the University of Pennsylvania's School of Engineering and Applied Science said it's the first Ivy League school to offer an undergraduate major in AI.

"The rapid rise of generative AI is transforming virtually every aspect of life: health, energy, transportation, robotics, computer vision, commerce, learning and even national security," Penn said in a Feb. 13 press release. "This produces an urgent need for innovative, leading-edge AI engineers who understand the principles of AI and how to apply them in a responsible and ethical way."

The bachelor of science in AI offers coursework in machine learning, computing algorithms, data analytics and advanced robotics and will have students address questions about "how to align AI with our social values and how to build trustworthy AI systems," Penn professor Zachary Ives said.

"We are training students for jobs that don't yet exist in fields that may be completely new or revolutionized by the time they graduate," added Robert Ghrist, associate dean of undergraduate education in Penn Engineering.

FYI, the cost of an undergraduate education at Penn, which typically spans four years, is over $88,000 per year (including housing and food).

For those not heading to college or who haven't signed up for any of those online AI certificates, their AI upskilling may come courtesy of their current employer. The Boston Consulting Group, for its Feb. 9 report, What GenAI's Top Performers Do Differently, surveyed over 150 senior executives across 10 sectors. Generally:

Bottom line: companies are starting to look at existing job descriptions and career trajectories, and the gaps they're seeing in the workforce when they consider how gen AI will affect their businesses. They've also started offering gen AI training programs. But these efforts don't lessen the need for today's workers to get up to speed on gen AI and how it may change the way they work -- and the work they do.

In related news, software maker SAP looked at Google search data to see which states in the US were most interested in "AI jobs and AI business adoption."

Unsurprisingly, California ranked first in searches for "open AI jobs" and "machine learning jobs." Washington state came in second place, Vermont in third, Massachusetts in fourth and Maryland in fifth.

California, "home to Silicon Valley and renowned as a global tech hub, shows a significant interest in AI and related fields, with 6.3% of California's businesses saying that they currently utilize AI technologies to produce goods and services and a further 8.4% planning to implement AI in the next six months, a figure that is 85% higher than the national average," the study found.

Virginia, New York, Delaware, Colorado and New Jersey, in that order, rounded out the top 10.

Over the past few months, I've highlighted terms you should know if you want to be knowledgeable about what's happening as it relates to gen AI. So I'm going to take a step back this week and provide this vocabulary review for you, with a link to the source of the definition.

It's worth a few minutes of your time to know these seven terms.

Anthropomorphism: The tendency for people to attribute humanlike qualities or characteristics to an AI chatbot. For example, you may assume it's kind or cruel based on its answers, even though it isn't capable of having emotions, or you may believe the AI is sentient because it's very good at mimicking human language.

Artificial general intelligence (AGI): A description of programs that are as capable as -- or even more capable than -- a human. While full general intelligence is still off in the future, models are growing in sophistication. Some have demonstrated skills across multiple domains ranging from chemistry to psychology, with task performance paralleling human benchmarks.

Generative artificial intelligence (gen AI): Technology that creates content -- including text, images, video and computer code -- by identifying patterns in large quantities of training data and then creating original material that has similar characteristics.

Hallucination: Hallucinations are unexpected and incorrect responses from AI programs that can arise for reasons that aren't yet fully known. A language model might suddenly bring up fruit salad recipes when you were asking about planting fruit trees. It might also make up scholarly citations, lie about data you ask it to analyze or make up facts about events that aren't in its training data. It's not fully understood why this happens, but hallucinations can arise from sparse data, information gaps and misclassification.

Large language model (LLM): A type of AI model that can generate human-like text and is trained on a broad dataset.

Prompt engineering: This is the act of giving AI an instruction so it has the context it needs to achieve your goal. Prompt engineering is best associated with OpenAI's ChatGPT, describing the tasks users feed into the algorithm. (e.g. "Give me five popular baby names.")

Temperature: In simple terms, model temperature is a parameter that controls how random a language model's output is. A higher temperature means the model takes more risks, giving you a diverse mix of words. On the other hand, a lower temperature makes the model play it safe, sticking to more focused and predictable responses.

Model temperature has a big impact on the quality of the text generated in a bunch of [natural language processing] tasks, like text generation, summarization and translation.

The tricky part is finding the perfect model temperature for a specific task. It's kind of like Goldilocks trying to find the perfect bowl of porridge -- not too hot, not too cold, but just right. The optimal temperature depends on things like how complex the task is and how much creativity you're looking for in the output.
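The effect of temperature is easy to see in a toy sampler; the logits below are made-up scores for four candidate tokens, not values from any real model.

```python
import numpy as np

def sample_next_token(logits, temperature=1.0):
    """Temperature-scaled sampling: lower temperature -> more predictable choices."""
    scaled = np.asarray(logits, dtype=float) / temperature
    probs = np.exp(scaled - scaled.max())   # softmax, shifted for numerical stability
    probs /= probs.sum()
    return np.random.choice(len(probs), p=probs)

logits = [2.0, 1.0, 0.5, 0.1]                          # hypothetical token scores
print(sample_next_token(logits, temperature=0.2))      # conservative: almost always token 0
print(sample_next_token(logits, temperature=1.5))      # riskier: a more varied mix of tokens
```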

Editors' note: CNET is using an AI engine to help create some stories. For more, see this post.

More:

AI and You: OpenAI's Sora Previews Text-to-Video Future, First Ivy League AI Degree - CNET

Read More..

GIS-based non-grain cultivated land susceptibility prediction using data mining methods | Scientific Reports – Nature.com

Research flow

The NCL susceptibility prediction study includes four main parts: (1) screening and analysis of the influencing factors of NCL; (2) construction of the NCL susceptibility prediction model; (3) NCL susceptibility prediction; and (4) evaluation of the prediction results. The research flow is shown in Fig. 2.

The NCL locations were obtained from Google Earth interpretation, field surveys and data released by the local government, which yielded a total of 184 NCL locations. To determine the non-NCL locations, GIS software was applied and 184 locations were randomly selected. To reduce modeling bias, the non-NCL points were generated at a distance of 200 m from the NCL locations. The data were then divided into training and testing samples in a ratio of 7:3, forming the training dataset and the testing dataset (Fig. 3).

Currently, there is no unified consensus on the factors influencing NCL. Therefore, based on historical research materials and on-site field investigations24,25,26,27,28, 16 appropriate Non-grain Cultivated Land Susceptibility conditioning factors (NCLSCFs) were chosen for modelling NCL susceptibility in accordance with topographical, geological, hydrological, climatological and environmental conditions. Alongside this, a systematic literature review was also performed on NCL modelling to aid in identifying the most suitable NCLSCFs for this study. The NCLSCF maps are shown in Fig. 4.

Typical NCL factors map: (a) Slope; (b) Aspect; (c) Plan curvature; (d) Profile curvature; (e) TWI; (f) SPI; (g) Rainfall; (h) Drainage density; (i) Distance from river; (j) Lithology; (k) Fault density; (l) Distance from fault; (m) Landuse; (n) Soil; (o) Distance from road.

(1) Topographical factors

The occurrence of NCL and its recurrence frequency depend heavily on the topographical factors of an area. Several topographical factors, such as slope, elevation and curvature, are triggering parameters for the development of NCL activities29. Here, six topographical factors were chosen: altitude, slope, aspect, plan and profile curvature, and topographic wetness index (TWI). All these factors play a considerable part in NCL development in the study area. These factors were prepared using Shuttle Radar Topography Mission (SRTM) digital elevation model (DEM) data with 30 m resolution in ArcGIS software. The output topographical factors range as follows: altitude from 895 to 3,289 m (Fig. 3), slope from 0 to 261.61%, aspect across nine directions (flat, north, northeast, east, southeast, south, southwest, west, northwest), plan curvature from −12.59 to 13.40, profile curvature from −13.05 to 12.68 and TWI from 4.96 to 24.75. The following equation was applied to compute TWI:

$$TWI = \ln\frac{\alpha}{\tan\beta + C}$$

(1)

where \(\alpha\) specifies flow accumulation, \(\beta\) specifies slope and C is the constant value (0.01).

(2) Hydrological factors

Sub-surface hydrology is treated as an activating mechanism for the occurrence of NCL, as water plays a significant part in soil moisture content. Therefore, four hydrological factors, namely drainage density, distance from river, stream power index (SPI) and annual rainfall, were chosen for modelling NCL susceptibility30. Here, SRTM DEM data of 30 m spatial resolution was used to map the first three hydrological variables. The drainage density and distance from river maps were prepared using the line density and Euclidean distance tools, respectively, in the GIS platform. The following formula was applied to compute SPI.

$$SPI = A_s \times \tan\beta$$

(2)

where \(A_s\) specifies the specific catchment area in square meters and \(\beta\) specifies the slope angle in degrees. The precipitation map of the area was derived from the statistics of 19 climatological stations around the province, with a statistical period of 25 years, using the kriging interpolation method in the GIS platform. The output drainage density value ranges from 0 to 1.68 km/km². Meanwhile, the value of distance from river ranges between 0 and 9,153.93 m, average annual rainfall varies from 175 to 459.98 mm and the value of SPI ranges from 0 to 8.44.
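As a rough illustration of how these two indices follow from the DEM derivatives, the NumPy sketch below applies Eqs. (1) and (2) to small made-up grids of flow accumulation and slope; the array values are placeholders, not data from the study.

```python
import numpy as np

def twi(flow_acc, slope_deg, c=0.01):
    """Topographic wetness index: TWI = ln(alpha / (tan(beta) + C)), Eq. (1)."""
    beta = np.radians(slope_deg)
    return np.log(flow_acc / (np.tan(beta) + c))

def spi(catchment_area, slope_deg):
    """Stream power index: SPI = As * tan(beta), Eq. (2)."""
    return catchment_area * np.tan(np.radians(slope_deg))

flow_acc = np.array([[120.0, 300.0], [50.0, 900.0]])   # placeholder flow accumulation grid
slope = np.array([[5.0, 12.0], [2.0, 25.0]])           # placeholder slope grid (degrees)
print(twi(flow_acc, slope))
print(spi(flow_acc, slope))
```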

(3) Geological factors

The characteristics of the rock mass, i.e., the lithological characteristics of an area, significantly impact NCL activities31. Therefore, in NCL susceptibility studies, geological factors are commonly used as input parameters to optimize the NCL prediction assessment. In the current study, three geological factors (namely lithology, fault density and distance from fault) were chosen. The lithological map and fault lines were obtained from the geological map of the study area, gathered from the local government at a scale of 1:100,000. The fault density and distance from fault maps were prepared using the line density and Euclidean distance tools, respectively, in the GIS platform. In this area, the value of fault density varies from 0 to 0.54 km/km² and distance from fault ranges from 0 to 28,247.1 m. The lithological map of this area is presented in Fig. 4b.

(4) Environmental factors

Several environmental factors can also be significant triggering factors for NCL occurrence in mountainous or hilly regions32. Here, land use land cover (LULC), soil and distance from road were selected as environmental variables for predicting NCL susceptibility. The LULC map was obtained from Landsat 8 OLI satellite images by applying the maximum probability algorithm in ENVI. The soil texture map was prepared based on the soil map of the study area. The road map of this area was digitized from the topographical map provided by the local government. The output LULC factor was classified into six land use classes, the soil map was classified into eight soil texture groups and the value of distance from road ranges from 0 to 31,248.1 m.

As the NCLSCFs are selected manually, and their dimensions and the methods used to quantify the data are derived through mathematical operations, the resulting model inputs may suffer from multicollinearity problems33. Such problems arise from precise or highly correlated relationships between NCLSCFs, which can lead to model distortion or difficulty in estimation. In light of this, to avoid potential multicollinearity, this study examines the variance inflation factor and tolerance index to assess whether multicollinearity exists among the NCLSCFs.

The multicollinearity (MC) analysis was conducted among the chosen NCLSCFs to optimize the NCL susceptibility model and its predictions34. The tolerance (TOL) and variance inflation factor (VIF) statistics were used to test MC in SPSS software. Studies indicate that there is a multicollinearity issue if the VIF value is > 5 and the TOL value is < 0.10. TOL and VIF were measured applying the following formulas:

$$TOL = 1 - R_j^2$$

(3)

$$VIF = \frac{1}{TOL}$$

(4)

where \(R_j^2\) represents the regression value of factor j on the other factors.
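A minimal sketch of this check using statsmodels; the three factor columns are random placeholders standing in for the real NCLSCF values sampled at the training points.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(0)
factors = pd.DataFrame({            # placeholder values for three conditioning factors
    "slope": rng.random(200),
    "twi": rng.random(200),
    "rainfall": rng.random(200),
})

vif = pd.Series(
    [variance_inflation_factor(factors.values, i) for i in range(factors.shape[1])],
    index=factors.columns, name="VIF",
)
tol = (1.0 / vif).rename("TOL")       # TOL = 1 / VIF, equivalent to 1 - R_j^2
print(pd.concat([vif, tol], axis=1))  # flag factors with VIF > 5 or TOL < 0.10
```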

This section details the machine learning models of GBM and XGB, as used in NCL susceptibility studies.

GBM is one of the most popular machine learning methods for prediction performance analysis, frequently applied by researchers in different fields and treated as a supervised classification technique. A variety of classification and regression problems are often solved by the GBM method, which was first proposed by Friedman35. The model is based on an ensemble of weak prediction models, such as decision trees, and is therefore considered one of the most important prediction models. Three components are required in a GBM model: a loss function, a weak learner, and an optimization of the loss function in which an additive function incorporates the weak learners into the model. In addition to these components, three important tuning parameters, namely n-tree, tree depth and shrinkage (i.e., the maximum number of trees, the highest possible interaction among the independent variables and the learning rate, respectively), are required to build a GBM model36. The advantage of such a model is its capacity to determine the loss function and weak learners precisely. It is complex to obtain the optimal estimate by directly applying the loss function \(\psi(y, f)\) and weak learner \(h(x, \theta)\). Thus, to solve this problem, a new function \(h(x, \theta_t)\) was fitted to the negative gradient \(\{g_t(x_i)\}_{i=1}^{N}\) along with the observed data:

$$g_t(x) = E_y\left[\frac{\partial \psi(y, f(x))}{\partial f(x)}\,\Big|\,x\right]_{f(x) = f^{t-1}(x)}$$

(5)

This new function is highly correlated with \(-g_t(x)\). The algorithm permits us to derive a least-squares minimization from the method by applying the following equation:

$$(\rho_t, \theta_t) = \arg\min \sum_{i=1}^{N}\left[-g_t(x_i) + \rho h(x_i, \theta)\right]^2$$

(6)
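A GBM of this kind can be fitted with scikit-learn's GradientBoostingClassifier, whose n_estimators, max_depth and learning_rate arguments correspond to the n-tree, tree depth and shrinkage parameters named above; the random feature matrix, labels and 70/30 split below are placeholders for the real training data, and the hyperparameter values are illustrative rather than those tuned in the study.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.random((368, 16))                 # placeholder: 368 locations x 16 NCLSCFs
y = rng.integers(0, 2, 368)               # placeholder: NCL (1) vs non-NCL (0) labels
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

# n-tree, tree depth and shrinkage, the three tuning parameters named in the text
gbm = GradientBoostingClassifier(n_estimators=500, max_depth=3, learning_rate=0.05)
gbm.fit(X_train, y_train)
susceptibility = gbm.predict_proba(X_test)[:, 1]   # predicted probability of NCL
```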

Chen & Guestrin then went on to introduce the XGB algorithm, an advanced machine learning method that is more efficient than the others37. The XGB algorithm is based on classification trees and the gradient boosting framework, which it applies through parallel tree boosting. The algorithm is chiefly applied to boost the performance of different classification trees. A classification tree is usually made up of various rules that classify each input factor as a function of the predictor variables in a tree construction. This construction is developed as an individual tree whose leaves are assigned scores that convey and select the respective factor class, i.e., categorical or ordinal. The loss function used in the XGB algorithm to train the ensemble model includes a regularization term that deals specifically with the complexity of the trees38. This regularization method can therefore significantly enhance the performance of the prediction analysis by alleviating any over-fitting problems. The boosting method, combining weak learners, is used in the XGB algorithm to optimally predict the result. Three parameter groups (i.e., General, Task and Booster) are applied to configure XGB models. The weighted averages of several tree models are then combined to form the output result in XGB. The following optimization function was applied to form the XGBoost model:

$$OF(\theta) = \sum_{i=1}^{n} l\left(y_i, \bar{y}_i\right) + \sum_{k=1}^{K} \omega(f_k)$$

(7)

where \(\sum_{i=1}^{n} l(y_i, \bar{y}_i)\) is the optimization loss function over the training dataset, \(\sum_{k=1}^{K} \omega(f_k)\) is the regularization term controlling the over-fitting phenomenon, K indicates the number of individual trees, \(f_k\) is the ensemble of trees, and \(\bar{y}_i\) and \(y_i\) indicate the actual and predicted output variables respectively.
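The same kind of data could be fed to the xgboost library, where the regularization discussed above appears through the reg_lambda and reg_alpha arguments; all hyperparameter values and the random inputs here are illustrative, not those used in the study.

```python
import numpy as np
from xgboost import XGBClassifier

rng = np.random.default_rng(7)
X = rng.random((368, 16))          # placeholder feature matrix (16 NCLSCFs)
y = rng.integers(0, 2, 368)        # placeholder NCL / non-NCL labels

model = XGBClassifier(
    n_estimators=400,              # number of boosted trees
    max_depth=4,                   # tree depth
    learning_rate=0.05,            # shrinkage
    reg_lambda=1.0, reg_alpha=0.0, # regularization, playing the role of omega(f_k) in Eq. (7)
    eval_metric="logloss",
)
model.fit(X, y)
scores = model.predict_proba(X)[:, 1]   # predicted NCL susceptibility
```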

Kennedy, an American social psychologist, developed the PSO algorithm based on the food-seeking and feeding behavior of bird flocks39. It is a meta-heuristic simulation of a social model, often applied in behavioral studies of fish schooling, bird flocking and swarming theory. Non-linear problems in day-to-day research can be solved by applying the PSO method. The PSO algorithm has been widely applied to determine the best achievable direction to collect food, specifically for bird and fish intelligence. Here, birds are treated as particles, and they always search for an optimal solution to the problem. In this model, a bird is considered an individual, and the swarm is treated as a group, as in other evolutionary algorithms. The particles always try to locate the best possible solution for a respective problem in an n-dimensional space, where n indicates the number of parameters of the respective problem40. PSO consists of two fundamental principles: position and speed. These are the basic principles for the movement of each particle.

Hence, \(x_t = (x_{t1}, x_{t2}, \ldots, x_{tn})\) and \(v_t = (v_{t1}, v_{t2}, \ldots, v_{tn})\) are the position and speed of the i-th particle in the t-th iteration. The following formulas are used to update the i-th particle's position and speed in the (t+1)-th iteration:

$$v_{t+1} = \omega v_t + c_1 r_1 (p_t - x_t) + c_2 r_2 (g_t - x_t)$$

$$x_{t+1} = x_t + v_{t+1}$$

where \(x_t\) is the particle's previous position; \(p_t\) is its personal best position; \(g_t\) is the global best position; \(r_1\) and \(r_2\) are random numbers between 0 and 1; \(\omega\) is the inertia weight; \(c_1\) is the cognitive coefficient and \(c_2\) is the social coefficient. Several types of methods exist for assigning the weights of the respective particles. Among them, standard 2011 PSO is the most popular and has been widely used by previous researchers. Here, standard 2011 PSO was used to calculate the particle weight assignment using the following formula:

$$\omega = \frac{1}{2\ln 2} \quad \text{and} \quad c_1 = c_2 = 0.5 + \ln 2$$

(8)
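The velocity and position updates, with the standard 2011 PSO weights of Eq. (8), can be sketched as follows; the toy sphere objective stands in for whatever model-tuning objective the hybrid workflow actually optimizes, and the particle count and iteration limit are illustrative.

```python
import numpy as np

def pso_minimize(objective, dim, n_particles=30, iters=100, seed=0):
    """Minimal particle swarm optimizer using the standard 2011 PSO coefficients."""
    rng = np.random.default_rng(seed)
    w = 1.0 / (2.0 * np.log(2.0))            # inertia weight
    c1 = c2 = 0.5 + np.log(2.0)              # cognitive and social coefficients
    x = rng.uniform(-1.0, 1.0, (n_particles, dim))   # particle positions
    v = np.zeros_like(x)                             # particle speeds
    pbest, pbest_val = x.copy(), np.array([objective(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()

    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # velocity update
        x = x + v                                                   # position update
        vals = np.array([objective(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest

best = pso_minimize(lambda p: float(np.sum(p ** 2)), dim=5)   # toy objective
print(best)
```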

Evaluation is an important step to quantify the accuracy of each output method. In other words, the quality of the output model is established through a validation assessment41. Studies indicate that several statistical techniques can be applied to evaluate the accuracy of the algorithms; among them, the most frequently used is the receiver operating characteristic-area under curve (ROC-AUC). Here, the statistical techniques of sensitivity (SST), specificity (SPF), positive predictive value (PPV), negative predictive value (NPV) and ROC-AUC were all applied to validate and assess the accuracy of the models. These statistical techniques were computed in accordance with four indices, i.e., true positive (TP), true negative (TN), false positive (FP) and false negative (FN)42. Correctly and incorrectly identified NCL susceptibility zones are represented by TP and FP, and correctly and incorrectly identified non-NCL susceptibility zones are represented by TN and FN, respectively. The ROC is most often used as a standard process to evaluate the accuracy of the methods; it is based on event and non-event phenomena. For these techniques, a higher value represents good model performance and a lower value represents poor performance. The statistical techniques applied in this study were measured through the following formulas:

$$SST = \frac{TP}{TP + FN}$$

(9)

$$SPF = \frac{TN}{FP + TN}$$

(10)

$$PPV = \frac{TP}{FP + TP}$$

(11)

$$NPV = \frac{TN}{TN + FN}$$

(12)

$$AUC = \frac{\Sigma TP + \Sigma TN}{P + N}$$

(13)
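These indices map directly onto a confusion matrix, as in the scikit-learn sketch below; the labels and predicted probabilities are made-up examples, not results from the study.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])                   # made-up test labels
y_prob = np.array([0.9, 0.2, 0.7, 0.4, 0.1, 0.6, 0.8, 0.3])   # made-up NCL probabilities
y_pred = (y_prob >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sst = tp / (tp + fn)          # sensitivity, Eq. (9)
spf = tn / (fp + tn)          # specificity, Eq. (10)
ppv = tp / (fp + tp)          # positive predictive value, Eq. (11)
npv = tn / (tn + fn)          # negative predictive value, Eq. (12)
auc = roc_auc_score(y_true, y_prob)   # ROC-AUC
print(sst, spf, ppv, npv, auc)
```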

See more here:

GIS-based non-grain cultivated land susceptibility prediction using data mining methods | Scientific Reports - Nature.com

Read More..

Data mining the archives | Opinion – Chemistry World

History, including the history of science, has a narrative tradition. Even if the historian's research has involved a dive into archival material, such as demographic statistics or political budgets, to find quantitative support for a thesis, the stories it tells are best expressed in words, not graphs. Typically, any mathematics it requires would hardly tax an able school student.

But there are some aspects of history that only a sophisticated analysis of quantitative data can reveal. That was made clear in a 2019 study by researchers in Leipzig, Germany,1 who used the Reaxys database of chemical compounds to analyse the growth in the number of substances documented in scientific journals between 1800 and 2015. They found that this number has grown exponentially, with an annual rate of 4.4% on average.

And by inspecting the products made, the researchers identified three regimes, which they call proto-organic (before 1861), organic (1861 to 1980) and organometallic (from 1981). Each of these periods is characterised by a change: a progressive decrease in the variability or volatility of the annual figures.

There's more that can be gleaned from those data, but the key points are twofold. First, while the conclusions might seem retrospectively consistent with what one might expect, only precise quantification, not anecdotal inspection of the literature, could reveal them. It is almost as if all the advances in both theory (the emergence of structural theory and of the quantum description of the chemical bond, say) and in techniques don't matter so much in the end to what chemists make, or at least to their productivity in making. (Perhaps unsurprisingly, the two world wars mattered more to that, albeit transiently.)

Such a measure speaks to the unusual ontological stability of chemistry

Second, chemistry might be uniquely favoured among the sciences for this sort of quantitative study. It is hard to imagine any comparable index to gauge the progress of physics or biology. The expansion of known chemical space is arguably a crude measure of what it is that chemists do and know, but it surely counts for something. And as Guillermo Restrepo, one of the 2019 study's authors and an organiser of a recent meeting at the Max Planck Institute for Mathematics in the Sciences in Leipzig on quantitative approaches to the history of chemistry, says, the existence of such a measure speaks to the unusual ontological stability of chemistry: since John Dalton's atomic theory at the start of the 19th century, it has been consistently predicated on the idea that chemical compounds are combinations of atomic elemental constituents.

Still, there are other ways to mine historical evidence for quantitative insights into the history of science, often now aided by AI techniques. Matteo Valleriani of the Max Planck Institute for the History of Science in Berlin, Germany, and his colleagues have used such methods to compare the texts of printed Renaissance books that used parts of the treatise on astronomy by the 13th-century scholar Johannes de Sacrobosco. The study elucidated how relationships between publishers, and the sheer mechanics of the printing process (where old plates might be reused for convenience), influenced the spread and the nature of scientific knowledge in this period.

And by using computer-assisted linguistic analysis of texts in the Philosophical Transactions of the Royal Society in the 18th and 19th centuries, Stefania Degaetano-Ortlieb of Saarland University in Germany and colleagues have identified the impact of Antoine Lavoisier's new chemical terminology from around the 1790s. This amounts to more than seeing new words appear in the lexicon: the statistics of word frequencies and placings disclose the evolving norms and expectations of the scientific community. At the other end of the historical trajectory, an analysis of the recent chemical literature by Marisol Bermúdez-Montaña of Tecnológico de Monterrey in Mexico reveals the dramatic hegemony of China in the study of rare-earth chemistry since around 2003.

All this work depends on the accessibility of archival data, and it was a common refrain at the meeting that this can't be taken for granted. As historian of science Jeffrey Johnson of Villanova University in Pennsylvania, US, pointed out at the meeting, there is a private chemical space explored by companies who keep their results (including negative findings) proprietary. And researchers studying the history of Russian and Soviet chemistry have, for obvious geopolitical reasons, had to shift their efforts elsewhere, and for who knows how long?

But even seemingly minor changes to archives might matter to historians: Robin Hendry of Durham University in the UK mentioned how the university library's understandable decision to throw out paper copies of old journals that are available online obliterates tell-tale clues for historians of which pages were well-thumbed. The recent cyberattacks on the British Library remind us of the vulnerability of digitised records. We can't take it for granted that the digital age will have the longevity or the information content of the paper age.

Originally posted here:

Data mining the archives | Opinion - Chemistry World

Read More..

Top 14 Data Mining Tools You Need to Know in 2024 and Why – Simplilearn

Driven by the proliferation of internet-connected sensors and devices, the world today is producing data at a pace like never before. While one part of the globe is sleeping, the other part is beginning its day with Skype meetings, web searches, online shopping, and social media interactions. Data generation, on a global scale, is a never-ceasing process.

A report published by the cloud software company DOMO on the amount of data the virtual world generates per minute is striking. According to DOMO's study, each minute the Internet population posts 511,200 tweets, watches 4,500,000 YouTube videos, creates 277,777 Instagram stories, sends 4,800,000 GIFs, takes 9,772 Uber rides, makes 231,840 Skype calls, and transfers more than 162,037 payments via the mobile payment app Venmo.

With such massive volumes of digital data being captured every minute, most forward-looking organizations are keen to leverage advanced methodologies to extract critical insights from data, which facilitates better-informed decisions that boost profits. This is where data mining tools and technologies come into play.

Data mining involves a range of methods and approaches for analyzing large sets of data to extract business insights. It begins soon after data is collected in data warehouses and covers everything from cleansing the data to visualizing the discoveries gained from it.

Also known as "Knowledge Discovery," data mining typically refers to in-depth analysis of vast datasets that exist in varied emerging domains, such as Artificial Intelligence, Big Data, and Machine Learning. The process searches for trends, patterns, associations, and anomalies in data that enable enterprises to streamline operations, augment customer experiences, predict the future, and create more value.
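
As a concrete, if simplified, illustration of that search for patterns and anomalies, the sketch below uses the open-source scikit-learn library to cluster a small synthetic dataset and flag outliers. The dataset and parameter choices are invented purely for demonstration and do not come from any of the tools discussed here.

# Minimal sketch of pattern and anomaly discovery on invented, synthetic data.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.ensemble import IsolationForest
from sklearn.preprocessing import StandardScaler

# "Collected" data: 500 records with 4 numeric attributes.
X, _ = make_blobs(n_samples=500, n_features=4, centers=3, random_state=42)

# Cleansing/structuring step: put all attributes on a common scale.
X_scaled = StandardScaler().fit_transform(X)

# Pattern discovery: group similar records into clusters.
clusters = KMeans(n_clusters=3, n_init=10, random_state=42).fit_predict(X_scaled)

# Anomaly detection: flag records that do not fit the general patterns (-1 = outlier).
outliers = IsolationForest(contamination=0.02, random_state=42).fit_predict(X_scaled)

print("cluster sizes:", [int((clusters == c).sum()) for c in range(3)])
print("flagged outliers:", int((outliers == -1).sum()))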

The key stages involved in data mining include:

Data scientists employ a variety of data mining tools and techniques for different types of data mining tasks, such as cleaning, organizing, structuring, analyzing, and visualizing data. Here's a list of both paid and open-source data mining tools you should know about in 2024.

One of the best open-source data mining tools on the market, Apache Mahout, developed by the Apache Software Foundation, primarily focuses on collaborative filtering, clustering, and classification of data. Written in Java, an object-oriented, class-based programming language, Apache Mahout incorporates useful Java libraries that help data professionals perform diverse mathematical operations, including statistics and linear algebra.

The top features of Apache Mahout are:

Dundas BI is one of the most comprehensive data mining tools used to generate quick insights and facilitate rapid integrations. The high-caliber data mining software leverages relational data mining methods, and it places more emphasis on developing clearly-defined data structures that simplify the processing, analysis, and reporting of data.

Key features of Dundas BI include:

Teradata, also known as the Teradata Database, is a top-rated data mining tool that features an enterprise-grade data warehouse for seamless data management and data mining. The market-leading data mining software, which can differentiate between "cold" and "hot" data, is predominantly used to get insights into business-critical data related to customer preferences, product positioning, and sales.

The main attributes of Teradata are:

The SAS Data Mining Tool is a software application developed by the Statistical Analysis System (SAS) Institute for high-level data mining, analysis, and data management. Ideal for text mining and optimization, the widely adopted tool can mine data, manage data, and perform statistical analysis to provide users with accurate insights that facilitate timely and informed decision-making.

Some of the core features of the SAS Data Mining Tool include:

The SPSS Modeler software suite was originally owned by SPSS Inc. but was later acquired by the International Business Machines Corporation (IBM). The SPSS software, now an IBM product, allows users to apply data mining algorithms to develop predictive models without any programming. The popular data mining tool is available in two editions: IBM SPSS Modeler Professional and IBM SPSS Modeler Premium, the latter incorporating additional features for entity analytics and text analytics.

The primary features of IBM SPSS Modeler are:

One of the most well-known open-source data mining tools written in Java, DataMelt integrates a state-of-the-art visualization and computational platform that makes data mining easy. The all-in-one DataMelt tool, integrating robust mathematical and scientific libraries, is mainly used for statistical analysis and data visualization in domains dealing with massive data volumes, such as financial markets.

The most prominent DataMelt features include:

A GUI-based, open-source data mining tool, Rattle leverages the R programming language's powerful statistical computing abilities to deliver valuable, actionable insights. Through Rattle's built-in log tab, users can obtain the R code that duplicates their GUI activities, review it, and extend the logged code without any restrictions.

Key features of the Rattle data mining tool include:

One of the most trusted data mining tools on the market, Oracle's data mining platform is powered by the Oracle Database. It provides data analysts with top-notch algorithms for specialized analytics, data classification, prediction, and regression, enabling them to uncover insightful data patterns that help make better market predictions, detect fraud, and identify cross-selling opportunities.

The main strengths of Oracle's data mining tool are:

Fit for both small and large enterprises, Sisense allows data analysts to combine data from multiple sources into a single repository. The first-rate data mining tool incorporates widgets as well as drag-and-drop features, which streamline the process of refining and analyzing data. Users can select different widgets to quickly generate reports in a variety of formats, including line charts, bar graphs, and pie charts.

Highlights of the Sisense data mining tool are:

RapidMiner stands out as a robust and flexible data science platform, offering a unified space for data preparation, machine learning, deep learning, text mining, and predictive analytics. Catering to both technical experts and novices, it features a user-friendly visual interface that simplifies the creation of analytical processes, eliminating the need for in-depth programming skills.

Key features of RapidMiner include:

KNIME (Konstanz Information Miner) is an open-source data analytics, reporting, and integration platform allowing users to create data flows visually, selectively execute some or all analysis steps, and inspect the results through interactive views and models. KNIME is particularly noted for its ability to incorporate various components for machine learning and data mining through its modular data pipelining concept.

Key features include:

Orange is a comprehensive toolkit for data visualization, machine learning, and data mining, available as open-source software. It showcases a user-friendly visual programming interface that facilitates quick, exploratory, and qualitative data analysis along with dynamic data visualization. Tailored to be user-friendly for beginners while robust enough for experts, Orange democratizes data analysis, making it more accessible to everyone.
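
Beyond the visual canvas, Orange also exposes a Python scripting interface. The fragment below is a minimal sketch that assumes Orange3 is installed and uses its bundled iris dataset; exact class names and call signatures can vary between releases, so treat it as illustrative rather than definitive.

# Illustrative Orange3 scripting sketch (API details may vary by version).
import Orange

# Load one of the datasets bundled with Orange.
data = Orange.data.Table("iris")

# Train a classification tree on the full table.
learner = Orange.classification.TreeLearner()
model = learner(data)

# Predict on the training data and compute a rough accuracy by hand.
predicted = model(data)            # predicted class indices
accuracy = (predicted == data.Y).mean()
print("training accuracy:", round(float(accuracy), 3))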

Key features of Orange include:

H2O is a scalable, open-source platform for machine learning and predictive analytics designed to operate in memory and across distributed systems. It enables the construction of machine learning models on vast datasets, along with straightforward deployment of those models within an enterprise setting. While H2O's foundational codebase is Java, it offers accessibility through APIs in Python, R, and Scala, catering to various developers and data scientists.
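
A typical workflow from Python looks roughly like the sketch below. It assumes the h2o package is installed and that a local CSV file holds a labelled dataset; the file path and column names are hypothetical, so this is an illustrative outline rather than a definitive recipe.

# Illustrative H2O workflow from Python (file path and column names are hypothetical).
import h2o
from h2o.estimators import H2OGradientBoostingEstimator

h2o.init()  # start or connect to a local, in-memory H2O instance

# Load a labelled dataset into a distributed H2OFrame.
frame = h2o.import_file("customers.csv")        # hypothetical file
frame["churned"] = frame["churned"].asfactor()  # treat the label as categorical

train, test = frame.split_frame(ratios=[0.8], seed=42)

# Train a gradient boosting model on all remaining columns.
predictors = [c for c in frame.columns if c != "churned"]
model = H2OGradientBoostingEstimator(ntrees=50, seed=42)
model.train(x=predictors, y="churned", training_frame=train)

print(model.model_performance(test))
h2o.cluster().shutdown()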

Key features include:

Zoho Analytics offers a user-friendly BI and data analytics platform that empowers you to craft visually stunning data visualizations and comprehensive dashboards quickly. Tailored for businesses big and small, it simplifies the process of data analysis, allowing users to effortlessly generate reports and dashboards.

Key features include:

The demand for data professionals who know how to mine data is on the rise. On the one hand, there is an abundance of job opportunities and, on the other, a severe talent shortage. To make the most of this situation, gain the right skills, and get certified by an industry-recognized institution like Simplilearn.

Simplilearn, the leading online bootcamp and certification course provider, has partnered with Caltech and IBM to bring you the Post Graduate Program In Data Science, designed to transform you into a data scientist in just twelve months.

Ranked number-one by the Economic Times, Simplilearn's Data Science Program covers in great detail the most in-demand skills related to data mining and data analytics, such as machine learning algorithms, data visualization, NLP concepts, Tableau, R, and Python, via interactive learning models, hands-on training, and industry projects.

Read the original here:

Top 14 Data Mining Tools You Need to Know in 2024 and Why - Simplilearn

Read More..

These Are the Best Antivirus Apps for Macs in 2024 – Lifehacker

There are certainly fewer viruses around targeting Macs, partly because it makes more sense for bad actors to target Windows, which has a significantly bigger user base, but macOS is certainly not immune to viruses. Don't think that just because you own an Apple computer, you don't have to worry about malware.

Your Mac comes with some impressive security features built right in, including XProtect and Gatekeeper, but there's no harm in installing extra protection for extra peace of mind: the right antivirus tool is only going to improve your Mac's defenses, and some of the best anti-malware software developers out there offer packages for macOS.

What's more, they often come with extras besides the virus-fighting capabilities, including web tracker blocking and junk file removal. Here we've picked out our current favorites, weighing up everything from the ease of use of the interface to the range of features.

Malwarebytes for Mac offers a clean, straightforward interface. Credit: Lifehacker

Malwarebytes is an antivirus tech veteran, and its Malwarebytes for Mac software comes with a variety of useful features: An at-a-glance look at your computer's current safety status, basic VPN features to improve the privacy of your web browsing, and quick and easy manual scans that run a comprehensive audit of all the files on your system.

Okay, it's not the most feature-packed security tool out there, but it does the basics (like scheduled scanning) very well, and couldn't be any easier to use. The basic Malwarebytes for Mac scanner is free, while the Premium version (from $6.67 a month after a 14-day trial) offers round-the-clock protection and the additional VPN shield for connecting to the web.

Intego Mac Internet Security X9 gives you a comprehensive set of features. Credit: Lifehacker

Few companies take Mac security as seriously as Intego does, and it makes several antivirus packages available for macOS, including Intego Mac Internet Security X9: It'll protect against viruses and other network attacks, and comes with protections against fraudulent websites and email threats too, all wrapped up in an intuitive interface.

A lot of what Intego Mac Internet Security X9 does to keep your computer safe happens automatically without much input from you, including malware definition updates, but you can run scans manually. You'll have to pay from $49.99 per year to use the software on your system, but you can try it out free of charge for 14 days to see if you like it first.

Bitdefender Antivirus for Mac includes some useful extras. Credit: Lifehacker

Bitdefender is another of the long-serving security software brands that you can trust, and it offers a variety of solutions to protect your Mac. Bitdefender Antivirus for Mac is the cheapest of those solutions, which will set you back $59.99 per year after the 30-day trial has expired (though at the time of writing, you do get a discount on your first year).

In return for that cash you get real-time protection against viruses and ransomware, blocking and removal of adware on the web, a basic VPN service, and additional tools for staying safe while shopping and banking online. Everything is handled in a smart interface that keeps you right up to date with your security status.

AVG Antivirus Free is a simple and free solution. Credit: Lifehacker

If you're in the market for a free and lightweight antivirus tool for macOS, then AVG Antivirus Free fits the bill: It's not particularly advanced (hence the free bit), but it can do a comprehensive virus scan of your system for you, and if you need extra protection and features then there are premium options too (starting at $59.88 for the first year).

Everything is straightforward to use, from the smart scan that you can launch manually, to the file shield feature that interrogates every new file that gets added to your system to make sure it's safe to use. You also get an impressive level of customization, considering this is a free piece of software, so you can turn off features you don't think you need.

Avast Free Antivirus is one of the more advanced free options. Credit: Lifehacker

Another free antivirus tool for macOS that's worthy of your consideration is Avast Free Antivirus, and as with the AVG package, more advanced programs are available if you're prepared to pay (from $49.99 for the first year). It's a little more advanced than the AVG option above, but they're pretty similar (AVG and Avast are run by the same company).

The extra options you get here versus the AVG package include a network scanner and a traffic monitor for measuring the data usage of your apps, so you can tell if an app is using up more bandwidth than it really should. The smart scan is straightforward to use, and you'll also get advice about potential security vulnerabilities before they're exploited.

Excerpt from:
These Are the Best Antivirus Apps for Macs in 2024 - Lifehacker

Read More..