
Quobyte: AI supports sustainable logging – saving owls’ trees from the axe – Blocks and Files

Case Study. Northern Spotted Owls, a federally protected species, live in the Oregon forests where loggers want to cut down trees and harvest the timber. The areas where the owls live are protected from the loggers, but which areas are they? Data can show the way, enabling sustainable timber harvesting.

Recorded wildlife sounds, such as birdsong, are stored with Quobyte file, block and object storage software and interpreted using AI. This data shows where the owl habitats are located.

Forests of Douglas Fir, Ponderosa Pine, Juniper and other mixed conifers cover over 30.5 million acres of Oregon, almost half of the state. Finding out where the owls live is quite a task. The Center for Quantitative Life Sciences (CQLS) at Oregon State University, working with the USDA Forest Service, has deployed 1,500 autonomous sound and video recording units in the forests and is tracking them, gathering real-time data. The aim is to monitor the behaviour of wildlife living in the forests of Oregon to ensure that the logging industry's impact is managed.

The CQLS creates around 250 terabytes of audio and video data a month from the recording units, and maintains around 18PB of data at any given time. It keeps taking data off and reusing the space to avoid buying infinite storage.

Over an 18-month period it devised an algorithm to parse the audio recordings and identify different animal species. The algorithm creates spectrograms from the audio, and processes those spectrograms through a convolutional neural net based on the video. It can identify about thirty separate species, distinguish male from female, and even spot behavioural changes within a species over time.
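To make the pipeline concrete, here is a minimal sketch of an audio-to-spectrogram-to-CNN classifier, assuming librosa for audio processing and PyTorch for the model; the file name, class count and architecture are illustrative assumptions, not the actual CQLS algorithm.

```python
# Illustrative sketch of an audio -> spectrogram -> CNN species classifier.
# Library choices (librosa, PyTorch) and the architecture are assumptions,
# not the CQLS implementation described in the article.
import librosa
import numpy as np
import torch
import torch.nn as nn

def audio_to_spectrogram(path, sr=22050, n_mels=128):
    """Load a recording and convert it to a log-scaled mel spectrogram."""
    audio, _ = librosa.load(path, sr=sr)
    spec = librosa.feature.melspectrogram(y=audio, sr=sr, n_mels=n_mels)
    return librosa.power_to_db(spec, ref=np.max)

class SpeciesCNN(nn.Module):
    """Small convolutional net that classifies fixed-size spectrogram tiles."""
    def __init__(self, n_species=30):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d((8, 8)),
        )
        self.classifier = nn.Linear(32 * 8 * 8, n_species)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

# Usage, with a hypothetical clip: one spectrogram tile -> per-species scores.
# spec = audio_to_spectrogram("owl_clip.wav")
# tile = torch.tensor(spec[None, None, :, :128], dtype=torch.float32)
# scores = SpeciesCNN()(tile)
```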

The compute work takes place on an HPC setup comprising IBM Power System AC922 servers, collectively containing more than 6000 processors across 20 racks in two server rooms that serve 2500 users. The AC922 architecture puts AI-optimised GPU resources directly on the northbridge bus, much closer to the CPU than conventional server architectures.

CQLS needed a file system and storage solution able to keep massive datasets close to compute resources as swapping data in and out from external scratch resources doubled processing time.

At first it looked at public cloud storage options, but the associated costs were considered outrageously expensive.

CQLS evaluated a variety of storage alternatives and settled on Quobyte running on COTS hardware, rejecting pricier storage appliances that could require costly support arrangements.

The sizes of individual files vary from tiny to very large and everything in between. The Quobyte software is good when dealing with large files, as opposed to millions of highly similar small files. This is advantageous when working on AI training, where TIFF files can range from 20 to 200GB in size.

Concurrently, those files may need to be correlated with data from sensors, secondary cameras, microphones, and other instruments. Everything must flow through one server, which puts massive loading on compute and storage.

Quobyte's software uses four Supermicro servers with two Intel Xeon E5-2637 v4 CPUs @ 3.50GHz and 256GB RAM (DDR4-2400). LSI SAS3616 12Gbit/s SAS controllers run two 78-disk JBODs, filled with Toshiba MG07ACA14TA 14TB, SATA 6Gbit/s, 7,200rpm, 3.5-inch disk drives.

The entire HPC system is Linux-based and everything is mounted through the Quobyte client for x86-based machines and NFS for the PPC64LE (AC922) servers.

Many groups of users access the system. A single group could have hundreds or millions of files, depending on the work it does. Most groups use over 50TB each, and there is currently 2.6PB loaded on the Quobyte setup.

Christopher Sullivan, Assistant Director for Biocomputing at CQLS, said: "We have all kinds of pathways for data to come into the systems. First off, all research buildings at OSU are connected at a minimum of 40Gbit/sec, our building's incoming feed to the CGRB (Center for Genome Research and Biocomputing) is 100Gbit/sec, and there is a 200Gbit/sec network link between OSU and HMSC (Hatfield Marine Science Center) at the coast.

"To start, some of the machines in our core lab facility (not the sequencers) drop live data onto the system through SAMBA or NFS-mounted pathways. Next, we have users moving data onto the servers via a front-end machine, again providing services like SAMBA and SSH with a 40Gbit/sec network connection for data movement.

"This allows users to have machines around the university move data onto the systems automatically or by hand. For example, we have other research labs moving data from machines, or data collected in greenhouses and other sources. The data line to the coast mentioned above is used to move data onto the Quobyte for the plankton group, as another example."

What about backup?

Sullivan said: "Backup is something we need on a limited basis, since we can generally generate the data again more cheaply than the cost of backing up that large amount of space. Most groups back up the scripts and final output (ten per cent) of the space they use for work. Some groups take the original data and, if needed by grants, keep it on cold drives on shelves in a building a quarter-mile away from the primary. So again we do not need a ton of live backup space."

Sullivan said: "We found that using public clouds was too expensive, since we are not able to get the types of hardware in spot instances and data costs are crazy expensive. Finally, researchers cannot always tell what is going to happen with their work or how long it needs to run, etc.

"This makes the cloud very costly and on-premises very cost-effective. My groups buy big machines (256 thread count with 2TB RAM and 12x GPUs) that last six to seven years and can do anything they need. That would be paid for five times over in that same time frame in the cloud for that hardware. Finally, the file space is too expensive over the long haul, and hard to move large amounts of data on and off. We have the Quobyte to reduce our overall file space costs."

This is a complicated and sizeable HPC setup which does more than safeguard the Northern Spotted Owls' arboreal habitats. That work is done in an ingenious way, one that doesn't involve human bird-spotters trekking through the forests looking for the creatures.

Instead, AI algorithms analyse and interpret recorded bird songs, recognise those made by the owls and then log where and when the owls are located. That data can be used to safeguard areas of forests from the loggers, who can fell timber sustainably, cutting down owl-free timber.

The rest is here:
Quobyte: AI supports sustainable logging - saving owls' trees from the axe Blocks and Files - Blocks and Files

Read More..

Power users of Microsoft OneDrive suffer massive inconvenience: Read-only files – The Register

Microsoft is still completing a fix for an issue with its OneDrive cloud storage that "affects a large subset of users worldwide, who have a storage quota that exceeds 1TB," in which files become read-only.

The problem, incident OD280960, was first reported on August 26th, and the company's engineers soon worked out that some misconfigured process was "not recognizing user licenses and reverting the storage quota limit to the default settings of 1TB. We're changing the way the quota is calculated, which should mitigate the issue." The workaround, Microsoft said in its status update, was that "admins can individually set the quota for impacted users."

All was not well though, and 12 hours later Microsoft reported that "we've determined that the previously provided workaround is not viable or functioning as expected for all affected users and have removed that guidance from this message. We apologize for any inconvenience or confusion this may have caused."

A fixed fix was identified, and a few hours later the company was confident that "the deployment has completed successfully. Additionally, we've identified that the fix will take approximately 24 hours to take full effect."

Two days later though, on the 28th, Microsoft said that "we have received some reports that this issue is not resolved for users with custom quotas. Further investigation is required." The workaround that was earlier rejected was again recommended, that "admins attempt to manually set the quota for individual users."

The word "attempt" possibly signaled some doubt about how well this would work. "A more robust solution" is in the works, the company said. An additional apology was added to the status update. "We understand how impacting this issue has on your organization and we want to assure you that we are treating the issue with the utmost priority," it said.

Later that day another update referred to a "separate mitigation activity that will temporarily increase the quota to a value greater than 1TB, and then subsequently apply the correct value." Users were also advised that they should "initiate a refresh activity," such as logging into OneDrive on the web.

On the 30th, Microsoft said it was "in the process of completing our final validations within our internal environments prior to initiating a targeted release." Then yesterday, "We've completed the validation process and are deploying our solution for users with applied custom quotas."

Another update is expected soon and the hope is that all will now be well. It is fair to say though that resolving this "misconfiguration" has proved trickier than was originally thought.

The good news, perhaps, is that only a minority of users have storage exceeding 1TB in their OneDrive. For those with memories, for example, of 1.44MB floppy disks, it still seems a large amount of space, though easy enough to fill for the determined power user.

Link:
Power users of Microsoft OneDrive suffer massive inconvenience: Read-only files - The Register

Read More..

Wasabi Technologies Now on Carahsoft NASPO, SEWP V, and Additional Government Contracts – PRNewswire

Organizations across the Public and Private sectors are experiencing a boom in data production as a result of pandemic-driven digital transformation. Wasabi's low-cost, high-performing and reliable cloud storage helps organizations manage this deluge of data without overwhelming IT budgets. Wasabi provides organizations with the flexibility they need to continue to innovate and meet the demands of the modern workplace. Wasabi features 100% data immutability protection plus object-level immutability for the highest level of security, strong identity and multi-factor authentication, and compliance with the latest privacy and security standards.

Through its partnership with Carahsoft, Wasabi is now included on the following contracts and purchasing agreements:

"Reliably managing and storing data is top of mind for Government agencies, particularly as the sector continues to adjust to growing data storage needs amid hybrid work environments and a spike in ransomware attacks that are literally dismantling operations," said Wasabi CEO & Co-Founder David Friend. "That's where Wasabi comes in. Our dedication to high-performing, cost-effective cloud storage, as well as to meeting the latest privacy and security standards make us an attractive option for Government agencies looking to migrate to the cloud. Carahsoft has a proven track record of delivering services to meet their customer needs, and they have become a huge asset for us as we continue to forge strong Government partnerships."

"Wasabi's flexibility and predictable pricing model make them an ideal candidate for agencies looking for cost-efficient, high-quality storage that is as safe and secure as possible with high-performance and reliability," said Joe Tabatabaian, Sales Manager for Wasabi at Carahsoft. "By adding Wasabi to more contract vehicles, Carahsoft and our reseller partners are better able to serve Government and Educational institutions who turn to Carahsoft for a data storage solution."

For more information, contact the Wasabi team at Carahsoft at (703) 889-9723 or [emailprotected].

About Carahsoft

Carahsoft Technology Corp. is The Trusted Government IT Solutions Provider, supporting Public Sector organizations across Federal, State and Local Government agencies and Education and Healthcare markets. As the Master Government Aggregator for our vendor partners, we deliver solutions for Cybersecurity, MultiCloud, DevSecOps, Big Data, Artificial Intelligence, Open Source, Customer Experience and Engagement, and more. Working with resellers, systems integrators and consultants, our sales and marketing teams provide industry leading IT products, services and training through hundreds of contract vehicles. Visit us at http://www.carahsoft.com.

Contact: Mary Lange, 703-230-7434, [emailprotected]

About Wasabi Technologies

Wasabi provides simple, predictable and affordable hot cloud storage for businesses all over the world. It enables organizations to store and instantly access an unlimited amount of data at 1/5th the price of the competition with no complex tiers or unpredictable egress fees. Trusted by tens of thousands of customers worldwide, Wasabi has been recognized as one of technology's fastest-growing and most visionary companies. Created by Carbonite co-founders and cloud storage pioneers David Friend and Jeff Flowers, Wasabi has secured nearly $275 million in funding to date and is a privately held company based in Boston.

Follow and connect with Wasabi on Twitter, Facebook, Instagram and our blog.

Wasabi PR contact: Kaley Carpenter, Inkhouse for Wasabi, [emailprotected]

SOURCE Wasabi Technologies

http://www.wasabi.com

Visit link:
Wasabi Technologies Now on Carahsoft NASPO, SEWP V, and Additional Government Contracts - PRNewswire

Read More..

VAST picks up another $10M-plus customer: a US car-maker – Blocks and Files

High-end all-flash file storage startup VAST Data has picked up a second $10 million-plus sale in the same month it announced $10 million in US DoD orders, and a month after two customers ordered $20 million of its products.

In contrast, Dell Technologies has announced a slowdown in its overall storage array business. We understand VAST competes with Dell's Isilon (PowerScale), ECS, and Data Domain (PowerProtect) products.

Jeff Denworth, co-founder and CMO at VAST, provided an announcement quote: "We're thrilled to be able to deliver on a storage experience that is as smooth as a Sunday drive. With VAST, customers no longer need to worry about slow-performing datasets, so much so that we have made cloud storage infrastructure an afterthought for them."

The latest customer is a US-headquartered major auto manufacturer its name has not been disclosed. It is using VAST Universal Storage kit to store automotive design data, next-generation intelligent vehicle datasets, enterprise backup, application workloads and containerised big data applications.

The VAST storage is being deployed by this customer in every new core and edge data center globally. VAST says the systems will achieve radical savings over what it terms legacy all-flash systems. As all-flash systems only started becoming mainstream in the post-2010 timeframe, this is a bit rich.

Vehicle manufacturers are collecting more and more data from vehicles, and will be processing more data inside vehicles as they become more intelligent. The car business has become a data-intensive industry and we can start thinking of smart cars as edge data centers on wheels.

Who could this customer be? We're thinking it could be any one of Ford, GM or Tesla.

See original here:
VAST picks up another $10M-plus customer: a US car-maker Blocks and Files - Blocks and Files

Read More..

Western Digital unveils 20TB OptiNAND hard drive, pledges 50TB to follow – The Register

Western Digital has announced a "breakthrough in storage that works differently," in the form of a new architecture combining traditional platters with solid-state flash: OptiNAND.

Adding flash to traditional mechanical hard drives is not a new concept. Western Digital announced its first work on the concept back in 2011 after being beaten to market by rival Seagate's Momentus XT, a year prior. In both cases, the solid-state flash acted as a temporary buffer for the most commonly accessed data - attempting to blend the best of both storage worlds.

OptiNAND, though, is positioned differently. Rather than simply improving throughput and access time for the user's most commonly examined data, an OptiNAND-enabled drive is claimed to offer increased overall capacity, improved performance across the whole disk, and a fiftyfold increase in the amount of data retained if you accidentally pull the power in the middle of a write.

The heart of the system, beyond the spinning platters themselves, is a Universal Flash Storage (UFS)-standard Embedded Flash Drive (EFD) dubbed iNAND, developed at Western Digital subsidiary SanDisk. Rather than acting as a simple cache, the iNAND disk handles metadata on write positions and volumes - as part of a refresh system meant to avoid adjacent track interference, where a frequently written track will begin to influence data stored on the tracks around it.

"It used to be, not that many generations ago, that you could write 10,000 times before needing to refresh sectors on either side," Western Digital engineering fellow David Hall explained in an interview for the company blog. "And then as we pushed the tracks closer and closer together, it went to 100, then 50, then 10, and now for some sectors, it's as low as six."

Traditional refresh systems rely on tracking these metadata in DRAM, but the accuracy is limited. Tracking the same metadata in more capacious iNAND, or so Western Digital has claimed, improves the accuracy - and allows its engineers to boost the areal density of the mechanical portions of the drive, packing more storage into the same number of platters.
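As a rough illustration of that bookkeeping, the toy model below counts writes per track and flags the neighbouring tracks for a refresh once a threshold is crossed. The threshold and data structures are assumptions for teaching purposes, not Western Digital's firmware; the point is simply that this table of counters grows with track density, which is why more capacious metadata storage helps.

```python
# Toy model of adjacent-track-interference bookkeeping: count writes per
# track and schedule a refresh of the neighbours once a threshold is hit.
# Threshold and structure are illustrative assumptions, not WD firmware.
from collections import defaultdict

class RefreshTracker:
    def __init__(self, writes_before_refresh=6):
        self.threshold = writes_before_refresh
        self.write_counts = defaultdict(int)   # track number -> writes since last refresh

    def record_write(self, track):
        """Count a write; return neighbouring tracks that now need refreshing."""
        self.write_counts[track] += 1
        if self.write_counts[track] < self.threshold:
            return []
        self.write_counts[track] = 0
        return [track - 1, track + 1]          # adjacent tracks to re-write

tracker = RefreshTracker()
for _ in range(6):
    stale = tracker.record_write(track=1000)
print(stale)   # [999, 1001] after the sixth write to track 1000
```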

The first OptiNAND drives, sampling to "select customers" now, combine the technology with energy-assisted perpendicular magnetic recording (ePMR) to offer 20TB of storage across nine platters - 2.2TB per platter, the company's highest areal density yet.

At the same time OptiNAND is claimed to offer boosted performance by reducing the number of track interference refreshes required as well as the number of write-cache flushes. The latter also ties in to claims of improved reliability, with WD having claimed that an OptiNAND drive can retain "nearly 50x more customer data" in the event of an unexpected shutdown.

"With our IP and world-class development teams in HDD and flash, we are able to continuously push the boundaries of innovation to improve our customers' storage infrastructure," boasted Siva Sivaram, president of global technology and strategy at Western Digital.

"We have had an extraordinary journey of HDD innovation. We changed everything with HelioSeal in 2013; were first to ship energy-assisted HDDs in volume in 2019; and now were going to lead again with OptiNAND technology. This architecture will underpin our HDD technology roadmap for multiple generations as we expect that an ePMR HDD with OptiNAND will reach 50TB in the second half of the decade."

Western Digital had not responded to a query regarding pricing and commercial availability of the 20TB OptiNAND drive at the time of publication.

Originally posted here:
Western Digital unveils 20TB OptiNAND hard drive, pledges 50TB to follow - The Register

Read More..

Follow these Strategies to Win a Machine Learning Hackathon – Analytics Insight

The tech sphere brings exciting things for tech-savvy people literally every day. Whether you are a beginner or an expert, you can participate in many tech-related competitions, conferences, events, and seminars to sharpen your skills. One effective way to test your talent is the machine learning hackathon. Today, machine learning is emerging as a powerful technology behind every digital mechanism, and many people aspire to become well-versed in it. They can enhance their practical skills by participating in machine learning hackathons. Machine learning hackathons are conducted specifically for programmers, coders, and others involved in software development. Other professionals, like interface designers, project managers, domain experts, and graphic designers who work closely on software development, can also try their hand at the competition. During a hackathon, participants are asked to create working software from the datasets and models provided within a limited time. Fortunately, at the end of every machine learning hackathon, participants come away having learned a great deal. However, winning a hackathon is completely different from participating in one for the experience. If you are planning to be the star of the event, then you should follow certain strategies to win a machine learning hackathon.

If you are new to machine learning and want to give it a try in a hackathon, then short or online hackathons should be your first choice. Remember, victory comes with experience. Therefore, jumping directly into long hackathons won't secure you the winner's title. Start with short ones, under 24 hours, and then move on to long hackathons. But make sure you are well organized and prepared when you shift from short to long competitions.

As a beginner, you should try to enter a hackathon with someone who is experienced and has great knowledge of machine learning. This will help you both learn throughout the process and win the prize if possible. Besides, make sure you team up with somebody whose knowledge complements your own. For example, if you are a developer, join forces with a person who has business knowledge. With the combination of business and development skills, you can surely take first place.

Diverse here doesn't mean ideology but talent. If all the members of the team are developers with zero knowledge of other perspectives, then the result will be completely one-sided. Therefore, you should ensure that everybody on your team has something new and different to offer. Also, try to work with clients who can support you throughout the challenge. Ask them to check your progress regularly and give direction if necessary.

Success in a hackathon comes in two different ways: winning the competition, or impressing the client. Therefore, before beginning the work, sort out which one you are going to concentrate on. If you are planning to impress the judges and win the hackathon, then you should go with shiny software and an outstanding presentation. Impressing the client is quite the opposite: if you are planning to impress your sponsor, you should come up with useful software that can still be used after the competition is over.

Data is the core of software development, along with the coding. Therefore, always make sure you prepare the data well before starting core operations. But scraping data is time-consuming and can be tricky when it comes to dynamically generated content. Instead, try to go with publicly available data provided by IMDb, or maybe a Kaggle dataset. One thing to keep in mind is that many winning teams save time by using readily available data.


Read more:
Follow these Strategies to Win a Machine Learning Hackathon - Analytics Insight

Read More..

Are The Highly-Marketed Deep Learning and Machine Learning Processors Just Simple Matrix-Multiplication Accelerators? – BBN Times


Artificial intelligence (AI) accelerators are computer systems designed to enhance artificial intelligence and machine learning applications, including artificial neural networks (ANNs) and machine vision.

Most AI accelerators are just simple data matrix-multiplication accelerators. All the rest is commercial propaganda.

The main aim of this article is to understand the complexity of machine learning (ML) and deep learning (DL) processors and discover the truth about the so-called AI accelerators.

Unlike other computational devices that treat scalars or vectors as primitives, Google's Tensor Processing Unit (TPU) ASIC treats matrices as primitives. The TPU is designed to perform matrix multiplication at a massive scale.

At its core, you find something that is inspired by the heart and not the brain. It's called a systolic array, described in 1982 in "Why Systolic Architectures?"

This computational device contains 256 x 256 8-bit multiply-add units, a grand total of 65,536 processors capable of 92 trillion operations per second.
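That 92 trillion figure is straightforward arithmetic if one assumes the roughly 700MHz clock reported for the first-generation TPU, with each multiply-add unit contributing two operations per cycle:

```python
# Back-of-envelope check of the 92 TOPS figure, assuming the ~700 MHz clock
# reported for the first-generation TPU (two ops per multiply-add per cycle).
mac_units = 256 * 256          # 65,536 multiply-add units in the systolic array
ops_per_mac = 2                # one multiply + one add
clock_hz = 700e6               # assumed TPU v1 clock rate

tops = mac_units * ops_per_mac * clock_hz / 1e12
print(f"{tops:.1f} trillion operations per second")   # ~91.8, rounded to 92
```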

It uses DDR3 with only 30GB/s to memory. Contrast that to the Nvidia Titan X with GDDR5X hitting transfer speeds of 480GB/s.

Whatever, it has nothing to do with real AI hardware.

A central main processor is commonly defined as a digital circuit which performs operations on some external data source, usually memory or some other data stream, taking the form of a microprocessor implemented on a single metal-oxide-semiconductor (MOS) integrated circuit chip.

It could be supplemented with a coprocessor, performing floating point arithmetic, graphics, signal processing, string processing, cryptography, or I/O interfacing with peripheral devices. Some application-specific hardware units include video cards for graphics, sound cards, graphics processing units and digital signal processors.

A central processing unit (CPU), also called a central processor, main processor or just processor, is the electronic circuitry that executes instructions comprising a computer program. The CPU performs basic arithmetic, logic, controlling, and input/output (I/O) operations specified by the instructions in the program.

There are a lot of processing units, as listed below:

A Graphical Processing Unit (GPU) enables you to run high-definition graphics on your computer. GPU has hundreds of cores aligned in a particular way forming a single hardware unit. It has thousands of concurrent hardware threads, utilized for data-parallel and computationally intensive portions of an algorithm. Data-parallel algorithms are well suited for such devices because the hardware can be classified as SIMT (Single Instruction Multiple Threads). GPUs outperform CPUs in terms of GFLOPS.

The TPU and NPU fall under the narrow/weak AI/ML/DL accelerator class: specialized hardware accelerators or computer systems designed to accelerate specific AI/ML applications, including artificial neural networks and machine vision.

Big-Tech companies such as Google, Amazon, Apple, Facebook, AMD and Samsung are all designing their own AI ASICs.

Typical applications include algorithms for training and inference in computing devices, such as self-driving cars, machine vision, NLP, robotics, internet of things, and other data-intensive or sensor-driven tasks. They are often manycore designs and generally focus on low-precision arithmetic, novel dataflow architectures or in-memory computing capability, with a typical AI integrated circuit chip containing billions of MOSFET transistors.

Focused on training and inference of deep neural networks, TensorFlow uses a symbolic math library based on dataflow and differentiable programming.

Differentiable programming uses automatic differentiation (AD), also known as algorithmic differentiation, computational differentiation, or auto-diff, together with gradient-based optimization, working by constructing a graph containing the control flow and data structures in the program.

Dataflow (or datastream) programming, in turn, is a paradigm that models a program as a directed graph of the data flowing between operations, thus implementing dataflow principles and architecture.
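For readers unfamiliar with automatic differentiation, the minimal dual-number sketch below shows the core idea of propagating derivatives alongside values; it illustrates the general technique, not TensorFlow's internal graph machinery.

```python
# Minimal forward-mode automatic differentiation using dual numbers.
# Illustrates the general AD idea, not TensorFlow's graph internals.
class Dual:
    def __init__(self, value, deriv=0.0):
        self.value, self.deriv = value, deriv

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value + other.value, self.deriv + other.deriv)

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule: (uv)' = u'v + uv'
        return Dual(self.value * other.value,
                    self.deriv * other.value + self.value * other.deriv)

def f(x):
    return x * x * x + x * 2     # f(x) = x^3 + 2x, so f'(x) = 3x^2 + 2

x = Dual(3.0, deriv=1.0)         # seed dx/dx = 1
y = f(x)
print(y.value, y.deriv)          # 33.0 and 29.0 (= 3*9 + 2)
```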

Things revolve around static or dynamic graphs, requiring the proper programming languages, such as C++, Python, R, or Julia, and ML libraries, such as TensorFlow or PyTorch.

What AI computing is still missing is a Causal Processing Unit, involving symmetrical causal data graphs, with the Causal Engine software simulating real-world phenomena in digital reality.

It is highly likely embedded in the human brain and Real-World AI.

See original here:
Are The Highly-Marketed Deep Learning and Machine Learning Processors Just Simple Matrix-Multiplication Accelerators? - BBN Times

Read More..

Jürgen Schmidhuber Appointed as Director of AI Initiative at KAUST – HPCwire

THUWAL, Saudi Arabia, Sept. 2, 2021 – King Abdullah University of Science and Technology (KAUST) announces the appointment of Professor Jürgen Schmidhuber as director of the University's Artificial Intelligence Initiative. Schmidhuber is a renowned computer scientist who is most noted for his pioneering work in the field of artificial intelligence, deep learning, and artificial neural networks. He will join KAUST on October 1, 2021.

Professor Schmidhuber earned his Ph.D. in Computer Science from the Technical University of Munich (TUM). He is a Co-Founder and the Chief Scientist of the company NNAISENSE and was most recently Scientific Director at the Swiss AI Lab, IDSIA, and Professor of AI at the University of Lugano. He is also the recipient of numerous awards, author of over 350 peer-reviewed papers, a frequent keynote speaker and an adviser to various governments on AI strategies.

His lab's deep learning neural networks have revolutionized machine learning and AI. By the mid-2010s, they were implemented on over 3 billion devices and used billions of times per day by customers of the world's most valuable public companies' products, e.g., for greatly improved speech recognition on all Android phones, greatly improved machine translation through Google Translate and Facebook (over 4 billion translations per day), Apple's Siri and Quicktype on all iPhones, the answers of Amazon's Alexa, and numerous other applications. In 2011, his team was the first to win official computer vision contests through deep neural nets with superhuman performance. In 2012, they had the first deep neural network to win a medical imaging contest (on cancer detection), attracting enormous interest from the industry. His research group also established the fields of artificial curiosity through generative adversarial neural networks, linear transformers and networks that learn to program other networks (since 1991), mathematically rigorous universal AI and recursive self-improvement in meta-learning machines that learn to learn (since 1987).

Professor Schmidhuber will join KAUST's already prominent AI faculty, recruit new faculty members and top student prospects from the Kingdom of Saudi Arabia and around the world, develop educational programs and entrepreneurial activities, and engage with key public and private sector organizations both within the Kingdom and globally. Researchers he joins include KAUST's new Provost, Lawrence Carin, a leading expert in artificial intelligence and machine learning, the Deputy Director of the AI Initiative, Bernard Ghanem, the founding Interim Director Wolfgang Heidrich, and many other highly cited faculty in Computer Science, Applied Mathematics, Statistics, the Biological Sciences, and the Earth Sciences.

AI and machine learning are becoming established as core methodologies throughout science and engineering, as they have been in commercial and social spheres. KAUST expects AI to aid in the analysis of huge amounts of data coming from the Kingdom's gigaprojects and the design of data gathering in these domains and its own laboratories and fields. AI is also expected to play a big role in the Kingdom's energy transitions in hydrogen, carbon capture, solar, and wind. In both basic research and incorporation into its daily operations, KAUST is aligned with the vision of the Kingdom for a digital transformation powered by artificial intelligence.

KAUST President Tony Chan, himself a highly cited computer scientist and applied mathematician, said: "I am delighted that we are able to recruit to KAUST a seminal leader in AI and machine learning in Dr. Schmidhuber. This signifies the commitment that KAUST, as well as Saudi Arabia, is making to lead in and contribute to this very important field."

About KAUST

King Abdullah University of Science and Technology (KAUST) advances science and technology through distinctive and collaborative research integrated with graduate education. Located on the Red Sea coast in Saudi Arabia, KAUST conducts curiosity-driven and goal-oriented research to address global challenges related to food, water, energy, and the environment.

Established in 2009, KAUST is a catalyst for innovation, economic development and social prosperity in Saudi Arabia and the world. The University currently educates and trains master's and doctoral students, supported by an academic community of faculty members, postdoctoral fellows and research scientists.

With over 100 nationalities working and living at KAUST, the University brings together people and ideas from all over the world. To learn more visit kaust.edu.sa.

Source: King Abdullah University of Science and Technology

View original post here:
Jürgen Schmidhuber Appointed as Director of AI Initiative at KAUST - HPCwire

Read More..

What is Ann Coulter's net worth?… – The US Sun

ANN Coulter is known as an American conservative media pundit, best-selling author, syndicated columnist, and lawyer.

Known as the "queen of conservative controversy," Coulter, 59, made headlines after she praised President Joe Biden.


Outside of making headlines, Coulter lives a comfortable life thanks to her massive net worth.

According to Celebrity Net Worth, Coulter has an estimated worth of $10 million thanks to her writing career.

Throughout her career, she has written 17 books, including:

Over the years, Coulter has also appeared in three films: "Feeding the Beast", "FahrenHYPE 9/11", and "Is It True What They Say About Ann?"

President Biden has been the face of a lot of controversy following his decision to pull troops out of Afghanistan in August 2021.

13 US soldiers lost their lives after ISIS-K launched a terrorist attack at the Kabul airport.

On August 31, 2021, Coulter quoted a tweet on Twitter from The New York Times about President Biden defending his decision and said: "Thank you, President Biden, for keeping a promise Trump made, but then abandoned when he got to office."

Coulter then followed that up with another tweet that said, "Trump REPEATEDLY demanded that we bring our soldiers home, but only President Biden had the balls to do it," adding, "Here are a few of Trump's wuss, B.S. - I mean 'masterful' - tweets," along with a screenshot of contradictory statements Donald Trump had once made on the conflict.

Following Coulter's unexpected praise, many took to Twitter to voice their confusion on something they never thought they would see.

"Who had Ann Coulter supporting Biden on their apocalypse bingo card?" one user joked while another said, "Guys. I retweeted ann coulter. I know I said that would be the signal that I'd been abducted but I assure you I'm fine."

"I wasn't aware we needed Ann Coulter to validate Biden on Afghanistan, but here we are," another added.


Over the years, Coulter has been engaged several times but has never officially tied the knot.

Previously, Coulter was known for her relationships with Spin founder and publisher Bob Guccione Jr. and conservative writer Dinesh D'Souza, but both have since ended.

Along with not being married, Coulter also does not have any kids.

See the original post here:
What is Ann Coulters net worth?... - The US Sun

Read More..

Altcoins 101: Definition, Explanations, Examples – Business Insider

Since the emergence of Bitcoin, the concept of a decentralized, trustless peer-to-peer (P2P) payment network has inspired an entire class of digital assets. The crypto markets are a product of Bitcoin's success, and the fast-growing space consists of more than 9,000 altcoins.

Now we have altcoins, which began to emerge in 2011 in an attempt to reinvent Bitcoin, with their own rules and improvements on different features.

An altcoin is a cryptocurrency alternative to Bitcoin; its name is a portmanteau of "alternative" and "coin." Since Bitcoin is widely regarded as the first of its kind, new cryptocurrencies developed afterward are viewed as alternative coins, or altcoins. The emergence of altcoins began around 2011, with the first generation built on the same blockchain engine as Bitcoin.

The first altcoin was Namecoin, which is based on Bitcoin's code and was released in April 2011. Namecoin is integral to the history of altcoins in that it showed that there's enough room in the crypto markets for more than one kind of coin.

Blockchains today can run several hundreds of "altcoins," fueling similar currency projects with unique rules and mechanisms. Altcoins like Ethereum can provide developers with a toolkit and programming language to build decentralized applications into the blockchain.

To understand how altcoins work, it's good to first understand how blockchain technology works, since that is where all cryptocurrencies operate.

The blockchain network is a distributed ledger that stores data like cryptocurrency transactions, NFT ownership, and decentralized finance (DeFi) smart contracts. This ledger is often referred to as a "chain" comprising "blocks" of data, which are used to verify new data before additional blocks can be added to the ledger.
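A minimal sketch of that chain-of-blocks idea: each block commits to the hash of the block before it, so tampering with earlier data breaks every later link. This is a teaching toy, not the format of any real blockchain.

```python
# Toy hash chain: each block stores the hash of the previous block, so
# altering old data invalidates every later link. Illustrative only.
import hashlib
import json

def block_hash(block):
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain, data):
    prev_hash = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "data": data, "prev_hash": prev_hash})
    return chain

chain = []
add_block(chain, "alice pays bob 1 coin")
add_block(chain, "bob pays carol 1 coin")

# Verification: recompute each link and compare.
ok = all(chain[i]["prev_hash"] == block_hash(chain[i - 1]) for i in range(1, len(chain)))
print(ok)   # True until any earlier block is modified
```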

This network, on which Bitcoin operates, is groundbreaking because it's a decentralized, trustless, P2P payment network that functions without a central authority or entity facilitating transactions. And altcoins function on the exact same premise as Bitcoin: to operate using this blockchain technology.

However, there have been some altcoins that have emerged to instead improve on the flaws of Bitcoin or to achieve some other goal. For example, Litecoin was designed by former Google engineer Charlie Lee as a "lite version of Bitcoin."

A fork refers to an update in network protocol (the open source software blockchains run on). There are two kinds of forks: a hard fork and soft fork. A soft fork is a minor upgrade to the software, and typically means nothing for users. A hard fork is a major change to the network, and requires users/miners to update to the latest software in order to continue mining. If developers decide they do not like the direction a blockchain network is going in, they can do a hard fork and create a new coin. Since 2009, Bitcoin has seen over 400 hard forks.

In the news: In the past year, crypto and celebrity influencers have come under fire for promoting cryptocurrencies. Even social media platforms like TikTok have banned crypto promoters from the platform.

Here are the two key things to know about altcoins.

Altcoins are a highly speculative and volatile investment. Speculation is a powerful driver of the crypto markets so it's important to do your research before investing in any altcoin. Half-baked whims and trading based on rumors are exactly what the experts advise against.

"The altcoin space is full of innovation and change. There are some interesting projects, and always many new projects. You've got to be very well informed and somewhat cautious," says Shone Anstey, CEO of LQwD. "Before plunking down hard-earned money, you need to do the research. Who is the team behind it, especially on the engineering side? What problem are they solving? And who are the financial backers?"

The decentralized, intangible, and often misunderstood nature of cryptocurrencies in general makes the long-term, steady success of an altcoin project difficult to predict. Some altcoins, like Ethereum, have maintained their position in the market through constant innovation and the strength of their community. Speculation has a more dramatic effect on newer altcoins. External factors like public perception, Bitcoin price fluctuation, or a meme on Reddit can oftentimes cause drastic price fluctuations.

While the crypto community stands united on its long-term bullish outlook for Bitcoin, the temptation of selling coins for short-term profits is built into the crypto zeitgeist. The crypto community created the term "hodl" in an effort to encourage people to hold on to their crypto assets for the long-term. "Hodl" means "hold on for dear life," and to resist the impulse of selling when the value of their crypto drops or rises.

Quick tip: Smart contracts are programs that are stored on blockchain that execute when certain conditions are met.

Cryptocurrency takes a toll on the environment. Bitcoin's energy consumption is a well-known flaw. As of August 2021, Bitcoin's energy consumption is 151.57 TWh according to Digiconomist's Bitcoin Energy Consumption Index that's comparable to what the entirety of Malaysia uses in energy.

The culprit for the tremendous costs of energy lies with the "proof of work" (PoW) consensus algorithm, which is how transactions are verified. And as Bitcoin mining has become more competitive, the computing power required to profitably mine new bitcoins is represented in factories loaded with servers all working toward solving the network's algorithms.

The PoW consensus mechanism is responsible for driving the competition for faster and more powerful computational processing power. The faster a miner's computer can complete the formula, the higher their odds of winning a block reward. Over time, miners have developed computer hardware with the sole function of processing the PoW consensus algorithm.
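A simplified proof-of-work sketch makes the competition concrete: the "formula" here is finding a nonce whose hash starts with a required number of zeros, so more hashing throughput means more attempts per second and better odds of a reward. Real Bitcoin mining uses double SHA-256 and a numeric target rather than hex prefixes, so treat this as illustrative only.

```python
# Simplified proof-of-work: find a nonce whose SHA-256 hash of (block data +
# nonce) starts with a required number of zero hex digits. More hash
# throughput means more attempts per second and better odds of winning.
import hashlib

def mine(block_data, difficulty=4):
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

nonce, digest = mine("example block header")
print(nonce, digest[:16])   # difficulty 4 takes ~65,000 attempts on average
```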

This has evolved from a miner running a program in the background of their PC to entire mining farms. Miners (or a pool of miners) will buy factories in countries where electricity is cheap and fill them with thousands of mining rigs. The energy required to keep the rigs running 24/7, combined with the fans and coolant systems to prevent overheating and fires, has made crypto mining an environmental disaster.

Bitcoin's carbon footprint has provided an opportunity for altcoins with greener consensus mechanisms to market themselves as "green coins." While proof of work is the main culprit for the Bitcoin energy crisis, blockchains like Polkadot (DOT) and Cardano (ADA) operate on proof of stake consensus mechanisms. Compared to the energy-hungry PoW, staking requires no mining in order to participate and earn coins. The success of Polkadot and Cardano prove that people can participate in crypto while being environmentally friendly.

Quick tip: Proof of work is the consensus mechanism used by Bitcoin and many other altcoins to audit transactions on the blockchain and "mine" new crypto. Crypto mining is solving computational formulas to audit transactions on the blockchain. Completing the formula means a chance at receiving a newly minted BTC reward.

Over time, many altcoins have come along, and they now fall into several main types.

Quick tip: One of the main benefits of blockchain technology is transparency. If something is on the blockchain, it means it is visible, permanent, and accessible to the public. On-chain typically refers to a transaction that is performed and recorded on the blockchain. Off-chain is a transaction not directly recorded on the blockchain.

Staking is the passive-investing strategy where an investor holds funds in a cryptocurrency wallet in order to earn rewards over time. When an investor chooses to stake their holdings, the network can use them to forge new blocks on the blockchain. Staking underpins proof-of-stake (PoS) consensus, which relies on participants locking up funds; stakers are essentially helping to keep the network running.

Also, staking is incredibly energy-efficient unlike mining. According to the Ethereum Foundation, the switch to a PoS system will reduce energy costs by 99.95%.

While no altcoin has managed to "dethrone" Bitcoin in value, many projects have proved themselves worthy enough to a global community of investors and developers:

The second-largest blockchain in crypto, Ethereum's evolution has taken it from an asset to an application. Founded by Vitalik Buterin in 2013, Ethereum is a distributed blockchain platform for smart contracts and dApps (decentralized applications). With its native token, ether (ETH), users can interact with the Ethereum platform. Ether can be traded on most crypto exchanges, used to pay transaction fees, or as collateral for ERC-20 tokens, which have DeFi utility.

Ethereum's integration with smart contracts via the Solidity programming language has distinguished the project from Bitcoin. A smart contract is a self-executing code that can run on the blockchain.

Launched officially in 2019 on the Ethereum blockchain, Chainlink is a decentralized oracle network that's meant to expand on smart contracts. In a nutshell, it connects smart contracts with "off-chain" data and services. The network is built around the LINK network and token and has two parts: on-chain and off-chain.

The on-chain component comprises oracle contracts on the Ethereum blockchain, which oversee and process data requests that come in from users. The off-chain component is made up of off-chain oracle nodes that connect to the Ethereum network, which are responsible for processing external requests that are later converted to contracts.

Aave is an open-source DeFi lending protocol that allows anyone to lend or borrow crypto without an intermediary. As a lender, you deposit funds that are allocated into a smart contract, where you can earn interest based on how Aave is performing in the market. Making a deposit also means you can borrow by using your deposit as collateral.

Rebranded from ETHLend following a successful ICO in 2017, Aave switched from a decentralized P2P lending platform into a liquidity pool model. This means loans are acquired from a pool instead of an individual lender. Since 2020, the Aave Protocol has been an open-source and non-custodial liquidity DeFi protocol for earning interest on deposits and borrowing assets. Holders of AAVE can decide on the direction of the project by voting on and discussing proposals.

Stellar is an open-source payment network that doubles as a distributed intermediary blockchain for global financial systems, designed so all the world's financial systems can work together on a single network. Stellar began in 2014 when Ripple co-founder Jed McCaleb disagreed with the direction of the Ripple project. The ethos behind Stellar's development is to make international money transfers possible for the everyday person.

While Stellar is an open-source network for currencies and payments, Stellar Lumens (XLM) is the circulating native asset on the network. Stellar keeps its ledger in sync using its Stellar Consensus Protocol (SCP). Instead of relying on a miner network, SCP uses the Federated Byzantine Agreement algorithm, enabling faster transactions.

Uniswap is a decentralized exchange ecosystem built on the Ethereum blockchain. Launched in 2018, Uniswap uses an on-chain automated market maker. One of Uniswap's unique features is that anyone can be a market maker by depositing their assets into a pool and earning fees based on trading activity.

Uniswap uses an automated market maker protocol that executes trades according to a series of smart contracts. The smart contracts automate price discovery, allowing users to swap one token for another without an intermediary. In traditional finance, market makers are usually brokerage houses with incentives that can cause a conflict of interest.
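A minimal sketch of the constant-product model behind Uniswap-style pools: the pool holds reserves of two tokens and keeps their product constant after deducting a fee, so the output amount, and hence the price, falls out of the reserve ratio without an order book. The 0.3% fee matches Uniswap v2; everything else here is a teaching simplification, not the production contract logic.

```python
# Constant-product AMM sketch: reserves x and y satisfy x * y = k, and a swap
# keeps k constant after deducting a 0.3% fee (the Uniswap v2 fee level).
# Simplified for illustration only.
def swap_output(amount_in, reserve_in, reserve_out, fee=0.003):
    """Tokens received for amount_in, given current pool reserves."""
    amount_in_after_fee = amount_in * (1 - fee)
    new_reserve_in = reserve_in + amount_in_after_fee
    # Keep the product constant: new_reserve_out = k / new_reserve_in
    new_reserve_out = (reserve_in * reserve_out) / new_reserve_in
    return reserve_out - new_reserve_out

# Hypothetical pool with 1,000 ETH and 3,000,000 USDC; swap 10 ETH for USDC.
print(round(swap_output(10, 1_000, 3_000_000), 2))
# ~29,615 USDC: below the 3,000 spot price because of slippage and the fee
```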

PotCoin is a Canadian-based digital currency that was launched in 2014 to allow consumers to buy and sell legal cannabis products. PotCoin was introduced as a solution for cannabis enthusiasts and the industry looking to legally transact at a time where banks were unable to do so.

PotCoin is an open source cryptocurrency forked from the Litecoin core. There are subtle changes to the PotCoin protocol including a shorter block generation time and the increased 420 million max supply of PotCoins. Potcoin switched from a Proof of Work mechanism to Proof of Stake in 2016 to make supporting the network more accessible and less harmful to the environment.

One of the first generation of altcoins, created in 2011, Litecoin is a cryptocurrency based on Bitcoin. Key things that distinguish Litecoin from Bitcoin include block time (Litecoin's blocks come four times faster), supply (Litecoin has a max supply of 84 million while Bitcoin's max supply is 21 million), its hashing algorithm, and distribution.

Dubbed the "digital silver" to Bitcoin's "digital gold," Litecoin's goal was to optimize the Litecoin asset while preserving the best parts of Bitcoin.

Quick tip: The ERC-20 standard is a set of rules applied to smart-contract tokens on the Ethereum blockchain. The flexibility and fungibility of the ERC-20 token allows dApp developers to create utility tokens, security tokens, or stablecoins.

Altcoins have come a long way since 2011, and continue to prove themselves as more than just an "alternative to Bitcoin." The crypto space is a fast-moving and increasingly popular point of interest for investors. Thanks to the innovation and integration of crypto into mainstream business, people can safely and legally buy altcoins on their phone or computer.

Easy access to the crypto markets doesn't mean it isn't risky. Before investing in an altcoin, ask yourself: have you researched and performed enough due diligence? Would you be able to explain the project to your family or friends at the dinner table? Whether you want to trade altcoins full-time or just "hodl" onto your Bitcoin, the choice is yours. Listening to the experts, evaluating the risks, and assessing your financial goals are keys to investing responsibly.

View post:
Altcoins 101: Definition, Explanations, Examples - Business Insider

Read More..