
Elon Musk Fact-Checked His Own Wikipedia Page and Requested Edits Including the Fact He Does ‘Zero Investing’ – Entrepreneur

Elon Musk has been perusing his Wikipedia page and suggesting edits.

December 23, 2019 · 2 min read

Tesla CEO Elon Musk has been spending the run-up to Christmas checking his Wikipedia page.

"Just looked at my wiki for 1st time in years. It's insane!" Musk tweeted on Sunday, saying the page was "a war zone with a zillion edits."

The tech billionaire also took issue with some of the language in the article. "Can someone please delete 'investor.' I do basically zero investing," he wrote.

"If Tesla & SpaceX go bankrupt, so will I. As it should be," he added, implying that most of his wealth consisted of his stock in the two companies. He made a similar argument during his legal fight with the UK cave diver Vernon Unsworth, during which he said he was cash-poor.

A Twitter user asked Musk whether Tesla counted as an investment, to which he replied that he'd rolled the proceeds of all his companies forward into one another. "These are all companies where I played fundamental founding role. Not right to ask others to put in money if I don't put in mine," he said.

Related: 19 Times Elon Musk Had the Best Response

Another Twitter user suggested that the term investor could be replaced with "business magnet," to which Musk replied "Yes" followed by a laughing emoji and a heart emoji. Musk has previously joked he would like to be known as a business magnet, as opposed to a business magnate.

At 8:11 p.m. on Sunday, an edit was made to Musk's Wikipedia page that replaced "investor" with "business magnet," "as requested by Elon Musk," according to the page's history. "Business magnet" has since been erased, but "investor" has not been restored.

Musk has been known to invest in companies: He was an early investor in the artificial-intelligence research startup DeepMind before it was bought by Google's parent company, Alphabet, in 2014. Musk told Vanity Fair in 2017 that he invested in DeepMind to keep an eye on the progress of AI rather than for financial return.


The 9 Best Blobs of 2019 – Livescience.com

When scientists discover a round, lumpy object that they can't totally explain, they have a special name for it: A blob.

Blobs come in all shapes and sizes. Some are as small as cells, others as big as galaxies. Some blobs live underwater, others deep in space or far below Earth's crust. Every blob is a good blob, but some blobs are great blobs. As 2019 draws to a close, wobble along with us as we recall the nine best blobs of the year. (Arranged from smallest to largest.)

Fun fact: All life begins in blob form. You did, your mother did and this adorable baby salamander did. While your own personal blobbiness is probably only recorded in a blurry ultrasound photo, certain amphibians lay transparent eggs, making their earliest stages of development visible to anyone with a microscope. In February 2019, photographer Jan van IJken shared this incredible time-lapse video of one such amphibian (an alpine newt) transforming from a single cell into a living, breathing tadpole.

The whole video is stunning, but the highlight may come at about the three-minute mark. That's when, after dividing from one cell into millions, the amphibious blob finally folds in on itself and begins to take on a familiar fetal shape. By the end of the video, a baby salamander hatches and swims away. Godspeed, young blob!

Jellyfish may be the most famous blobs in nature, and for good reason: with more than 2,000 species around the world, these unmistakably amorphous animals are easy to find near pretty much any coast on Earth.

This year, one jellyfish encounter earned top blob marks for us. In July, a pair of divers in England came face-to-faceless-head with a hulking barrel jellyfish (Rhizostoma pulmo), a rarely seen species that can grow about as large as a full-grown human. (Luckily, they caught the encounter on video.)

That wasn't the only human-size blob divers bumped into this year. There was also the gelatinous sac researchers found while investigating a sunken ship near Norway. That sac, as large as the divers themselves, was transparent and encased a strange yellow object. Upon inspection with a flashlight, the divers saw that the object appeared to be a clump of squid ink, and it was surrounded by hundreds of thousands of itty-bitty squid eggs.

The team determined that the sac belonged to a species of 10-armed cephalopod called the southern shortfin squid (Illex coindetii), which can lay about 200,000 eggs at a time in sacs like this one. In case the phrase "squiddy eggy blob" doesn't quite do it for you, the researchers also gave the sac a special name: "blekksprutgeleball," meaning "squid gel ball" in Norwegian.

In this year's blob news most likely to get you in trouble with HR, thousands of wiggly, 10-inch-long "penis fish" washed up on a California beach in early December.

In reality, these sausage-shaped castaways are not fish (or penises) at all, but a species of North American marine worm known as the "fat innkeeper worm." Their name comes from their penchant for building U-shaped burrows in the sand, which other tiny beach creatures like to sneak into in order to steal whatever food the innkeeper worm happens to throw away. How did thousands of these unfortunately-named, unfortunately-shaped blobs end up strewn across the beach? A storm likely tore up all their burrows and left the worms destitute. Keep that in mind the next time you have a bad day: At least you are not a homeless penis fish.

About halfway between your feet and the center of Earth, two continent-size mountains of hot, compressed rock pierce the gut of the planet. Technically, these mysterious hunks of rock are called "large low-shear-velocity provinces" (LLSVPs), because seismic waves always slow down when passing through them. But most scientists call them simply "the blobs."

In March, Eos (the official news site of the American Geophysical Union) shared an awesome 3D animation showing the most-detailed view of the blobs ever. The blobs begin thousands of miles below Earth's surface, where the planet's rocky lower mantle meets the molten outer core. One blob lurks deep below the Pacific Ocean, the other beneath Africa and parts of the Atlantic. Both of them stand about 100 times taller than Mount Everest and are as large as continents. Despite their massive scale, scientists don't really have any idea what the blobs are or why they're there. Could they impact volcanic activity? Maybe. They're too deep to study directly so, for now, these blobs must remain shrouded in mystery.

Not to be totally outshone by its neighbor, the moon revealed a mysterious subterranean blob this year, too.

In April, NASA scientists discovered what they're calling an "anomaly" of heavy metal hidden deep below the moon's South Pole-Aitken basin (the largest preserved impact crater anywhere in the solar system). A gravitational analysis suggests the metal blob lives hundreds of miles below the moon's surface, weighs about 2.4 quadrillion U.S. tons (2.18 quintillion kilograms) and is about five times larger than the Big Island of Hawaii. The anomaly appears to be weighing down the South Pole-Aitken crater by more than half a mile, and may be altering the moon's gravitational field.
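As a quick sanity check, the two mass figures quoted do agree with each other (using the standard US short ton of about 907.185 kg; the snippet is just an illustration, not from NASA's analysis):

```python
US_TON_KG = 907.18474            # one US (short) ton in kilograms

mass_tons = 2.4e15               # 2.4 quadrillion US tons, as reported
mass_kg = mass_tons * US_TON_KG  # convert to kilograms

print(f"{mass_kg:.2e} kg")       # ≈ 2.18e+18 kg, the article's 2.18 quintillion kg
```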

The sun's corona constantly breathes wispy strings of hot solar wind into space but, once in a while, those breaths become full-blown burps. According to a study in the February issue of the journal JGR: Space Physics, every few hours the plasma underlying solar wind grows significantly hotter, becomes noticeably denser, and pops out of the sun in rapid-fire orbs capable of engulfing entire planets for minutes or hours at a time. Officially, these solar burps are called periodic density structures, but astronomers have nicknamed them "the blobs," due to their lava-lamp-blob-like appearance.

These blobs are hundreds of times larger than Earth and can potentially pack twice as many charged particles as the average solar wind. Astronomers think they're related to solar storms (explosions of magnetic field activity on the sun's surface), but their true origin and function remains as unclear as the water in your lava lamp.

In 1987, a star in the Milky Way's nearest satellite galaxy erupted in a supernova explosion, leaving a cloud of colorful cosmic debris in its place. Behind that debris should be a neutron star (an ultradense stellar corpse) but astronomers have been unable to find one for the last 32 years. Now, in a study published in November, researchers think they've found that missing neutron star hiding in a "blob" of brighter-than-average radiation at the cloud's core. If verified, this discovery will not only solve a decades-old mystery, but will also confirm that the only thing better than a blob is a blob with a prize inside.

In a galaxy of blobs, two bubbles reign supreme: The Fermi Bubbles.

The Fermi Bubbles are twin blobs of high-energy gas ballooning out of both poles of the Milky Way's center, stretching into space for 25,000 light-years apiece (roughly the same as the distance between Earth and the center of the Milky Way). The bubbles are thought to be a few million years old, and likely have something to do with a giant explosion from our galaxy's central black hole, but observations are scarce, as they are typically only visible to ultra-powerful gamma-ray and X-ray telescopes. This September, however, astronomers writing in the journal Nature detected the bubbles in radio waves for the first time, revealing large quantities of energetic gas moving through the bubbles, possibly fueling them to grow even larger.

Will the Milky Way's biggest blobs get even bigger? Stay tuned in 2020 to find out.

Originally published on Live Science.


The ultimate guitar tuning guide: expand your mind with these advanced tuning techniques – Guitar World

How many of us really feel that our tuning habits are as good as they should be? Our ultimate tuning guide introduces the "impatient meditation", a fresh, multi-pronged approach to tuning, designed to enhance your cognitive focus, hand health, ear strength, and fretboard awareness as well as maximizing tonal accuracy. Electronic tuners are great, but we shouldn't have to rely on technology to save us.

Usually, I'm skeptical of anything that promises to transform your playing with "these three secrets", or offers "five tricks to become a fretboard super ninja overnight", and so forth. Playing guitar is an infinitely complex endeavor, with countless interlocking variables and a vast repertoire that would take even the most talented musician many lifetimes to master.

In the end, there are few shortcuts. But adopting some effective tuning methods might just qualify as one genuinely transformative, near-instant innovation.

It will literally enhance your control over every single note, and lies at the heart of broader guitaristic mastery. So this lesson is aimed at beginners and advanced guitarists alike - even Hendrix struggled with his tuning sometimes.

We all fall into lazy habits, allowing the compulsion to jam right now to override our better judgement. This results in much undesired dissonance, both literal and cognitive - the imperfections nag away at us, interrupting our flow, sapping our focus, and misbalancing the music.

We fret about whether to squeeze in some frantic peg-nudging, but even this is no guarantee of improvement (adjusting on the fly is a fine art).

It's all right for pianists, who have a whole category of musical employment devoted to their every tuning need. We guitarists have to do things ourselves... and we still endure the stereotype of doing them lazily ("How can you tell when a guitarist is out of tune? His hand starts moving").

While we should definitely hit back with jibes of our own ("Why are so many guitarist jokes one-liners? So the rest of the band can understand them"), we must also become masters of the tuning process, a complex, conceptually rich area of enquiry that connects together many aspects of musical perception.

Yes, electronic tuners are great, but we shouldn't have to rely on gadgets to save us when it comes to something so basic. In any case, tuning is a fascinating area of enquiry, connecting together many aspects of musical perception and providing consistent spark to the creative imagination. Gaining ear confidence will filter through to our whole sound while deepening our appreciation for music in general.

I actually think we're the lucky ones. Pianists, like most other instrumentalists, don't have to pick all this apart so much, and they're missing out. It's consistently fascinating, and the ideas found here can give anyone creative inspiration.

The whole tuning process can even be re-conceptualized as a ritualistic act of mental, musical, and manual preparation. Or just a time to chill out before you play. Either way, it's a lot more than just winding some pegs...

The impatient meditation isn't an exact tuning method - it's a way of approaching tuning. We aim to maximize flexibility, efficiency, and tonal precision by running through four concise ideas, which, taken together, allow us to balance the quirks of the guitar with the demands of the music at hand.

We're probably impatient, so here's the tl;dr version (all tabs below):

Get an overview: Slowly strum through the open strings and 12fr natural harmonics, taking a deep breath and focusing in on the sound to get a rough idea of where things are at.

5str fret-matching: Get your A string as in tune as required, and tune the open strings to notes along it (2fr/5fr/7fr/10fr). Then, tune a selection of fretted As on the other strings back to the open A.

Quick checks: Sample from three other methods to shore things up - melodic fret-matches, natural harmonics, and chordal checks. Try out key passages from your own music too.

Musical focus: Strum the 12fr harmonics, and take another deep breath. Relax your mind, acknowledging any nerves, and then calmly orient your full attention towards the music at hand.

If you familiarize yourself with the strengths, weaknesses, and incongruities of each individual step, you will quickly build an intuition for when and how to deploy them.

Ideally, you'll end up narrowing down, homing in on the most concise, pleasing phrase combinations for your own instrument and incorporating them into your playing routines.


A pristine, top-end Strat will be a different beast to the rickety nylon-string you found behind your friend's couch, and we should be able to adapt to any axe we come across. Or, for that matter, any other instrument.

Using a tuner won't help you compensate for the latter's intonation issues, and besides, some songs on the former will sound better when left a little deliberately messy. The ideas below, when combined, help you know what you want, and should allow you to near-optimally tune virtually any guitar.

We can utilize tuning time to foster broader musical improvement too. Since the process will always be part of our playing routines, I figured we may as well also use it to enhance areas such as ear strength, cognitive focus, fretboard awareness, and manual dexterity. It's also a fantastic way to build up core conceptual understanding around the physics of string vibration and the nature of aural perception.

All I've really done here is sample the best of a few tuning methodologies already in use, combining their strongest elements with a few minor adaptations and additions.

So it isn't really my creation; at least no more than I could say I invented my own style of cuisine yesterday by throwing together the tastiest things I found in the fridge with a dash of seasoning. I've incorporated feedback from friends, students, and the wider musical world too.

There's an extended 10,000-word version of this article on my website, going deep on the strengths and weaknesses of existing tuning techniques and explaining how this approach seeks to build on them.

We analyze James Taylor's microtonal stretched tuning and trace the design of the guitar's fretboard to the epic mathematical treatises of China's Ming Dynasty, while also learning from ancient Vedic musicology and 21st-century theoretical physics. Tuning's fascinations lie in its interconnectedness.

This is more about the how than the why. For the extra-curious, the full breakdown is on my website, with detailed musical and technical discussion. And I can't stress enough - this is an approach to tuning rather than an exact method. Learn from it, pick out what you like, and stay flexible.

At first glance, four steps may seem like overkill (let's be honest, you're probably wondering if you can be bothered to internalize them all). But the combination is designed to foster efficient, intuitive self-learning, which always saves you time in the long run (...and often the fairly short run too).

Anyway, using all four each time isn't the best approach - once you've tried everything on the menu, you'll know quickly what you want next time.

You'll be surprised at how fast you can speed things up without sacrificing accuracy. Running through the checks can become second-nature, and, unless things are a complete mess, tuning may only require a few seconds.

It just tends to be taught badly (or barely taught at all) - there's a whole lot of tuning-themed nonsense on the internet. Sitting down to learn things properly will permanently give us both a broader and finer control over our music, while also making everyone around us sound better.

- Get an overview: First, play the open strings and the 12fr harmonics in slow sequence, getting a rough feel for where things are at. Consider the music at hand, and also what imperfections the guitar itself may have. Take a deep breath, and really zoom in on the texture of the sound. (Unless it really sounds awful... in which case just get on with the next steps).

- Concert or relative? Decide whether you want to tune to exact concert pitch or not. If you do, match your A string to an external reference tone (maybe download the clip below to your phone). If you don't, just make sure the A sounds and feels about right, and matches any other instruments in the room.

- Fret-matching: Roughly fret-match the other five open strings to notes on the A, and then flip things round, matching the open A to fretted tones on the other strings. Pick evenly, and if you're playing through an amp, use a clean, mid-boosted tone. While you can of course check all the As against the reference tone, we should seek to develop the ear too.

Always use the under-tug-up method - i.e. go lower than the target, tug the string around to remove slack, then raise the pitch. Pull it in all directions, being firm but avoiding sudden movements. Ensure the A and D strings sound particularly happy with each other. And if you have a whammy bar, be sure to shake out any string-stick.

Pros and cons

+ Minimizes error compounding (errors don't carry over between strings)
+ Quick to run through, and gives strong, clear volumes
+ Gives you a concise overview of the guitar's intonation quirks
- Misleading if the reference string is corroded, damaged, set too high, etc.
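To make the fret-matching step concrete, here is a small sketch of the standard-tuning correspondences on the A string (the fret positions are the 2fr/5fr/7fr/10fr mentioned above; the code itself is my illustration, not part of the original method):

```python
# Frets on the A string that share a pitch class with each open string
# (standard tuning E A D G B e; octave differences are fine for tuning by ear).
A_STRING_MATCHES = {
    2: "B  -> open 2nd string",
    5: "D  -> open 4th string",
    7: "E  -> open 1st and 6th strings",
    10: "G -> open 3rd string",
}

for fret, match in A_STRING_MATCHES.items():
    print(f"A string, fret {fret:>2}: {match}")
```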

Now, we use a mix of quick checks to shore everything up. We can sample from several different methods, including melodic fret-matching, natural harmonics, and octave-heavy chord shapes - some of which also work as hand stretches. Find which chords and phrases suit your guitar best:

- Melodic check phrases - like an enhanced '5th-fret matching'

Pros and cons

+ More interlinked than the classic fret-matching approach
+ Avoids the familiar tuning cliche with quasi-melodic movements
+ Opens up your general awareness of when open strings can be used
- Somewhat harder to play than the classic fret-match method
- Phrases may never settle with each other on badly-intoned guitars

- Natural harmonics checks - avoiding the deviant 7th fret

Pros and cons

+ Beating N.H. resonances bring out overtone detail clearly
+ We avoid the 7fr harmonic, which is actually slightly sharp
+ Sweeps at the end are great when you know the right sound
- Quieter, more complex: takes your ear a while to zoom in
- Can fail to highlight nuanced intonation issues
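The "7fr harmonic is slightly sharp" point is easy to verify with a little math: that harmonic sounds the string's 3rd partial, which sits a pure fifth (ratio 3:2) above the octave, while fretted notes follow 12-tone equal temperament. A short calculation of the gap in cents (my own illustration, not from the article):

```python
import math

def cents(ratio):
    """Size of a frequency ratio in cents (1200 cents per octave)."""
    return 1200 * math.log2(ratio)

# A just perfect fifth (3:2) vs. an equal-tempered fifth (exactly 700 cents).
just_fifth = cents(3 / 2)        # ~701.96 cents
sharpness = just_fifth - 700.0

print(f"7fr harmonic is {sharpness:.2f} cents sharp")  # ~1.96 cents
```

Two cents is small, but audible as slow beating when you hold the harmonic against a fretted fifth, which is why the author steers the checks away from it.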

- Octave-heavy chord checks - beyond just open Emaj

Pros and cons

+ Places the frequencies in a more musical context
+ Can add in key chords from your upcoming pieces
+ Increases your familiarity with high neck positions
+ Some of the shapes function as hand stretches too (e.g. 07950)
- Usual major shapes aren't ideal due to temperament issues
- Complex for the ears, which can mislead us in many ways
- Can get chaotic on guitars with shaky intonation

- Necessary imperfection: Notice how each check method produces subtly different results? E.g., high-fretted notes may sound sharp, or the G and B strings might never quite seem to settle with each other across different chords.

This is to be expected - no instrument can ever be tuned perfectly. As we will see, factors like inharmonicity, build flaws, and temperament deviancy mean that there's no such thing as a perfect tuning.

And in any case, lots of guitar music can sound better with a little mess and crunch, ranging from Delta blues and 12-string folk to free improv and plenty of classic Hendrix.

Lap slide players use all manner of microtonal tweaks, and Tommy Emmanuel sometimes likes to detune his G string slightly, a trick also used by his blues forebears. Above all, it's about finding a sound that works for you (and the audience).

- Adding emotive context: Take another deep breath, and briefly call to mind the sentiments you want to get across with the music. Think about the most important passages in your first piece.

Strum through each chord or phrase slowly and evenly, considering their immediate effects on you. Are undesired frequencies dampening the emotional power? If so, try to isolate and correct them.

The best way to balance the imperfections is to focus on the physical locations of the music. E.g., if you're mainly playing low down the neck, make sure tuning there takes precedence over hyper-accuracy in higher positions. You may have to find compromises, especially on stiff-action guitars. Keep adjusting until you're happy; the audience will ultimately be grateful for it.

- Gathering yourself: Once you're satisfied with your sound, take a third and final deep breath, and rake firmly upwards through the 12fr natural harmonics. Take both your hands away from the strings, and empty your mind for a few seconds as you exhale.

Again, try out different meditative methods to see what works - you can hum a chord tone, silently count to eight, or even tense and relax your whole body in time to the rhythms of your first piece. (Never forget one of the key lessons from guitar history: people don't care how weird you look as long as you sound good.)

The uniqueness of each individual situation means there are always countless interlocking considerations. Each guitar is different, with varying imperfections to be investigated, taxonomized, and balanced, and each performance brings disparate musical, physical, and social demands. In the end, all aspects of musical perception are interconnected.

There are less immediate factors too, ranging from healthy guitar setup and effective restringing to building skill at retuning on the fly. I'll leave it up to you to adapt all this to non-standard tunings - it's an ideal opportunity for some intuitive conceptual exploration, pushing your mind up a level as you get into the processes of modification and recombination.

And it's vitally important that we place all this in the context of wider musical learning. For one thing, we must strengthen our ears over time, as this will drastically speed things up (this applies to pretty much everything else in music too).

We should also learn some of the science, visualizing how strings vibrate and seeing the fractional distribution of natural harmonics along them. See the full article for exercises, explanations, etc.

And if I have stumbled on any original insights, I ultimately owe them to the tutelage of Guy Harrup, the late, great jazz master of Bath, England, and Pandit Shivnath Mishra, my sitar guru in Benares. Guy, my first teacher, guided me through many different tunings with a relaxed, open-minded attitude, while the Pandit's wordless lessons helped open my ears to the vivid, infinitely detailed world of sruti (Indian classical microtonality).

The impatient meditation, cooked up in honor of my two gurus, was (hastily) named for its attempt to maximize accuracy and minimize lost jamming time through some efficient, calming sonic focus.

Tuning up really can become a reliable way of bringing harmony to your mind as well as to your guitar, but let's be honest: it would be kind of strange to feel no impatience at all while preparing to jam (apologies to any enlightened Buddhist monks reading this).


Christmas Lectures presenter Dr Hannah Fry on pigeons, AI and the awesome power of maths – inews

The mathematician hopes to show the strengths and weaknesses of algorithms in this year's Royal Institution shows, she tells Rachael Pells

Monday, 23rd December 2019, 5:12 pm

Driverless cars, robot butlers and reusable rockets: if the big inventions of the past decade and the artificial intelligence developed to create them have taught us anything, it's that maths is undeniably cool. And if you're still not convinced, chances are you've never had it explained to you via a live experiment with a pigeon before.

"We are trying to demonstrate how artificial intelligence works by pitting a kid against the pigeon, to see who can understand our instructions the quickest," she says. "I hope to interview both of them after. We'll see how that goes."

The experiment is one of several wacky ideas to feature in this year's lecture series airing on BBC Four, Secrets & Lies: The Hidden Power of Maths, through which Fry aims to demonstrate how maths, data and algorithms are at the heart of just about everything we do.

Humanising science

The reason for this particular experiment is to demonstrate how machines learn by way of reward which she hopes in turn will help to humanise AI.

Fry is only the third mathematician and the first female one to present the Christmas Lectures since their beginnings in 1825. Being asked to do so was "incredibly exciting", she tells i. "If you are a scientist and a communicator, this really is the pinnacle, the thing that everyone wants to do."

A key theme across the series' three talks will be the issue of uncertainty, and Fry wants to use the platform to encourage more public discussion around the use of algorithms, "about where we want the limit of that to be."

What Fry hopes to achieve with her lectures

The lectures are designed to be immersive and fun; in lecture one, for example, Fry busts some myths about the idea that life's big events come down to luck, and even claims to have found a mathematical formula for predicting how and when a person will fall in love.

But with AI taking centre stage, common cynicism and fears over a data-led future make an inevitable appearance. Lecture two reveals how data-gobbling algorithms have taken over our lives, while the final talk sets out to explore the limits of human control, including examples of calculations gone wrong and the algorithms behind fake news.

"I want to be honest about the awesome power of these mathematical ideas, but also [demonstrate] the very real limitations of something that doesn't understand what it means to be human," she explains.

"Algorithms are now so far advanced that they can be used to diagnose cancer by looking at an image, for example, which is amazing, really impressive, but AI also makes mistakes, and that can have a really damaging effect," she says. "The problem is we treat algorithms almost as though they're the [ultimate] scientific determinism." She agrees that many of our misconceptions of AI come down to a lack of understanding of how it works, or even thinking that it is magical and can answer every question, hence the pigeon experiment.

Do we need to worry about AI?

As someone who does understand the calculations behind this mysterious beast, does she worry about AI? Is it true algorithms are controlling us?

The answer, she says, is that AI is both better and worse than imagined. For example, a lot of people are absolutely convinced their phones can listen to their conversations, but they can't; not for any reason other than that doing so is technically incredibly difficult, and we're not at that stage yet.

Our smartphones may not be secretly monitoring what is being said around them, but we are largely oblivious to the connections that algorithms can make about our lives, she caveats. You may never have Googled that mattress that is stalking you around the internet, but you may have searched "back pain" or "poor sleep", and the algorithms make the connection.

While the use of personal data for advertising in this way is something that makes Fry uncomfortable, she believes we have to accept that we made a kind of deal if we want a free internet.

She hopes that viewers will come away from the lectures armed with information but that they will also feel reassured. "I think that narrative of humans versus machines is absolutely the wrong story," Fry concludes. "The future of all this is going to be a partnership between humans and machines; that's the only possible way that it can work."


Google CEO Sundar Pichai Is the Most Expensive Tech CEO to Keep Around – Observer

In his first year as Google chief, Pichai earned a base salary of $652,500. LLUIS GENE/AFP/Getty Images

From employee walkouts to congressional grilling, Google CEO Sundar Pichai has faced his fair share of adversity in 2019 as the public face of one of the worlds largest tech companies. But on a personal level, the Indian-American executive has had a great year, scoring not only a title bump, but also a giant pay raise.

Earlier this month, Pichai was appointed as CEO of Googles parent company, Alphabet, in addition to his existing responsibilities at Google. In that role, Pichai will receive $2 million in annual base salary starting next year, a company filing last Friday revealed.

SEE ALSO: 2019s Top 7 Tech IPO FlopsAnd Those Set for a Major 2020 Rebound

Although $2 million is within the range of what CEOs of comparable tech companies make these days (per Equilar, CEOs of the largest U.S. companies made a median $1.2 million in 2018), Pichai's Alphabet compensation package is an infinite jump from what his predecessor, Google cofounder Larry Page, earned in the same role: $1.

It was a common practice among tech entrepreneurs in Page's time to pay themselves a nominal salary and store most of their fortune in company stock to show their commitment to shareholders. Other notable CEOs who earn a $1 base salary include Facebook's Mark Zuckerberg, Twitter's Jack Dorsey and Oracle's Larry Ellison.

As CEO of Alphabet, Pichai will oversee about 30 specialized tech subsidiaries in addition to Google, including self-driving unit Waymo and AI lab DeepMind.

Also starting in 2020, Pichai will receive a hefty $240 million in performance-based stock awards over the next three years, the largest award package Google has granted any executive.

Pichai, a materials engineer educated in both India and the U.S., joined Google in 2004 as a member of its management team. He climbed the corporate ranks all the way to CEO in 2015. In his first year as Google chief, Pichai earned a base salary of $652,500. His total compensation skyrocketed the following year, when Google approved a $199 million stock award package for him, the largest in the company's history at the time.

In addition to his paycheck, Google also spends generously to keep its CEO safe. Last year, Google recorded a $1.2 million expense under an account called "CEO personal security allowance," which covered Pichai's day-to-day security costs, use of company private jets, and so on.

The security allowance in 2018 was almost twice the cost of the previous year due to a security upgrade in response to a violent shooting at YouTube (a Google subsidiary) headquarters in April 2018.

Read more here:

Google CEO Sundar Pichai Is the Most Expensive Tech CEO to Keep Around - Observer

Read More..

Machine Learning | Blog | Microsoft Azure

Tuesday, November 5, 2019

Enterprises today are adopting artificial intelligence (AI) at a rapid pace to stay ahead of their competition, deliver innovation, improve customer experiences, and grow revenue. AI and machine learning applications are ushering in a new era of transformation across industries from skillsets to scale, efficiency, operations, and governance.

Monday, October 28, 2019

Azure Machine Learning is the center for all things machine learning on Azure, be it creating new models, deploying models, managing a model repository, or automating the entire CI/CD pipeline for machine learning. We recently made some amazing announcements about Azure Machine Learning, and in this post I'm taking a closer look at two of the most compelling capabilities that your business should consider while choosing a machine learning platform.

Wednesday, July 17, 2019

Today we are announcing the open sourcing of our recipe to pre-train BERT (Bidirectional Encoder Representations from Transformers) built by the Bing team, including code that works on Azure Machine Learning, so that customers can unlock the power of training custom versions of BERT-large models for their organization. This will enable developers and data scientists to build their own general-purpose language representation beyond BERT.

Tuesday, June 25, 2019

The next time you see your physician, consider the times you fill in a paper form. It may seem trivial, but the information could be crucial to making a better diagnosis. Now consider the other forms of healthcare data that permeate your life, and that of your doctor, nurses, and the clinicians working to keep patients thriving.

Monday, June 10, 2019

Data scientists have a dynamic role. They need environments that are fast and flexible while upholding their organization's security and compliance policies. Notebook Virtual Machine (VM), announced in May 2019, resolves these conflicting requirements while simplifying the overall experience for data scientists.

Thursday, June 6, 2019

Build more accurate forecasts with the release of new capabilities in automated machine learning. Have scenarios with gaps in your training data, or that need contextual data or lag features to improve your forecast? Learn more about the new capabilities that can assist you.
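The "lags" capability mentioned above is easy to picture outside any particular framework. The sketch below is plain Python, not the Azure Machine Learning API: it builds supervised-learning rows from a univariate series by pairing lagged values with each target.

```python
def make_lag_features(series, lags):
    """Turn a univariate series into (features, target) rows,
    where the features are the values at t-lag for each lag
    and the target is the value at t. Rows whose lags would
    reach before the start of the series are dropped."""
    max_lag = max(lags)
    rows = []
    for t in range(max_lag, len(series)):
        features = [series[t - lag] for lag in lags]
        rows.append((features, series[t]))
    return rows

demand = [10, 12, 13, 15, 18, 21]
rows = make_lag_features(demand, lags=[1, 2])
# rows[0] pairs [12, 10] (the two previous values) with target 13
```

A forecasting model trained on such rows can then take contextual columns (weather, promotions) alongside the lag features.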

Tuesday, June 4, 2019

The automated machine learning capability in Azure Machine Learning service allows data scientists, analysts, and developers to build machine learning models with high scalability, efficiency, and productivity, all while sustaining model quality.

Wednesday, May 22, 2019

During Microsoft Build we announced the preview of the visual interface for Azure Machine Learning service. This new drag-and-drop workflow capability in Azure Machine Learning service simplifies the process of building, testing, and deploying machine learning models for customers who prefer a visual experience to a coding experience.

Thursday, May 9, 2019

Artificial intelligence (AI) has become the hottest topic in tech. Executives, business managers, analysts, engineers, developers, and data scientists all want to leverage the power of AI to gain better insights into their work and better predictions for accomplishing their goals.

Friday, May 3, 2019

With the exponential rise of data, we are undergoing a technology transformation as organizations realize the need for insight-driven decisions. Artificial intelligence (AI) and machine learning (ML) technologies can help harness this data to drive real business outcomes across industries. Azure AI and Azure Machine Learning service are leading customers to the world of ubiquitous insights and enabling intelligent applications, from product recommendations in retail and load forecasting in energy production to image processing in healthcare and predictive maintenance in manufacturing, among many more.

Original post:

Machine Learning | Blog | Microsoft Azure

Read More..

AI and machine learning products – Cloud AI | Google Cloud

AI Platform Notebooks

An enterprise notebook service to launch projects in minutes

AI Platform Notebooks is a managed service whose integrated JupyterLab environment makes it easy to create instances that come pre-installed with the latest data science and ML frameworks and integrate with BigQuery, Cloud Dataproc, and Cloud Dataflow for easy development and deployment.

Preconfigured virtual machines for deep learning applications

Deep Learning VM Image makes it quick and easy to provision a VM with everything you need to get your deep learning project started on Google Cloud. You can launch Compute Engine instances pre-installed with popular ML frameworks like TensorFlow, PyTorch, or scikit-learn, and add Cloud TPU and GPU support with a single click.

Preconfigured and optimized containers for deep learning environments

Build your deep learning project quickly with a portable and consistent environment for developing, testing, and deploying your AI applications on Google Kubernetes Engine (GKE), AI Platform, Cloud Run, Compute Engine, Kubernetes, and Docker Swarm. Deep Learning Containers provide a consistent environment across Google Cloud services, making it easy to scale in the cloud or shift from on-premises.

Data preparation for machine learning model training

Use the AI Platform Data Labeling Service to have human labelers annotate a collection of data that you plan to use to train a custom machine learning model. You submit representative samples to human labelers, who annotate them with the "right answers" and return the dataset in a format suitable for training a machine learning model.

Distributed training with automatic hyperparameter tuning

Use AI Platform to run your TensorFlow, scikit-learn, and XGBoost training applications in the cloud. You can also use custom containers to run training jobs with other machine learning frameworks.

Model hosting service with serverless scaling

Host your trained machine learning models in the cloud and use AI Platform Prediction to infer target values for new data.

Model optimization using ground truth labels

Sample the prediction from trained machine learning models that you have deployed to AI Platform and provide ground truth labels for your prediction input using the continuous evaluation capability. The Data Labeling Service compares your models' predictions with the ground truth labels to provide continual feedback on your model performance.
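The feedback loop described here can be sketched generically. The snippet below is plain Python, not the Data Labeling Service API: it compares sampled predictions against ground-truth labels and returns an accuracy figure plus the indices of the disagreements worth re-examining.

```python
def evaluate_predictions(predictions, ground_truth):
    """Compare sampled model predictions with human-provided
    ground-truth labels and return simple feedback metrics."""
    pairs = list(zip(predictions, ground_truth))
    matches = sum(p == g for p, g in pairs)
    # Disagreements are the cases worth inspecting, or worth
    # prioritising in the next round of labeling.
    disagreements = [i for i, (p, g) in enumerate(pairs) if p != g]
    return {"accuracy": matches / len(pairs), "disagreements": disagreements}

report = evaluate_predictions(["cat", "dog", "cat", "bird"],
                              ["cat", "dog", "dog", "bird"])
# one mismatch out of four sampled predictions
```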

Model evaluation and understanding using a code-free visual interface

Investigate model performances for a range of features in your dataset, optimization strategies, and even manipulations to individual datapoint values using the What-If Tool integrated with AI Platform.

Hardware designed for performance

Cloud TPUs are a family of hardware accelerators that Google designed and optimized specifically to speed up and scale up machine learning workloads for training and inference programmed with TensorFlow. Cloud TPUs are designed to deliver the best performance per dollar for targeted TensorFlow workloads and to enable ML engineers and researchers to iterate more quickly.

The machine learning toolkit for Kubernetes

Kubeflow makes deployments of machine learning workflows on Kubernetes simple, portable, and scalable by providing a straightforward way to deploy best-of-breed open-source systems for ML to diverse infrastructures.

See the original post here:

AI and machine learning products - Cloud AI | Google Cloud

Read More..

TinyML as a Service and machine learning at the edge – Ericsson

This is the second post in a series about tiny machine learning (TinyML) at the deep IoT edge. Read our earlier introduction to TinyML as-a-Service to learn how it compares with traditional cloud-based machine learning and with the embedded-systems domain.

TinyML is an emerging concept (and community) for running ML inference on ultra-low-power (ULP, ~1 mW) microcontrollers. TinyML as a Service will democratize TinyML, allowing manufacturers to start their AI business with TinyML running on microcontrollers.
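A large part of what makes inference feasible at ~1 mW is shrinking the model itself, and one common trick is 8-bit quantization. The sketch below is a minimal, framework-agnostic Python illustration of symmetric int8 quantization, not the recipe of any specific TinyML toolchain.

```python
def quantize_int8(weights):
    """Symmetric 8-bit quantization: map float weights onto the
    integer range [-127, 127], keeping one float scale per tensor.
    Storing int8 instead of float32 cuts the footprint 4x, which
    is what helps a model fit in an MCU's few hundred KB of flash."""
    scale = max(abs(w) for w in weights) / 127.0
    return [round(w / scale) for w in weights], scale

def dequantize(quantized, scale):
    return [q * scale for q in quantized]

weights = [0.5, -1.27, 0.02, 1.0]
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)
# each q value fits in one signed byte; approx closely tracks weights
```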

In this article, we introduce the challenges behind the applicability of ML concepts within the IoT embedded world. Furthermore, we emphasize that these challenges stem not only from the constraints of limited embedded devices, but remain evident even where ML-based IoT deployments are augmented by additional compute resources at the network edge.

To summarize the nature of these challenges, we can say:

Below, we take a closer look at each of these challenges.

Edge computing promises higher-performing service provisioning, from both a computational and a connectivity point of view.

Edge nodes support the latency requirements of mission-critical communications thanks to their proximity to end-devices, and enhanced hardware and software capabilities allow the execution of increasingly complex and resource-demanding services in the edge nodes. There is growing attention, investment, and R&D devoted to making the execution of ML tasks at the network edge easier. In fact, there are already several ML-dedicated "edge" hardware examples (e.g. Edge TPU by Google, Jetson Nano by Nvidia, Movidius by Intel) which confirm this.

Therefore, the question we are asking is: what are the issues that the edge computing paradigm has not been able to completely solve yet? And how can these issues undermine the applicability of ML concepts in IoT and edge computing scenarios?

We intend to focus on and analyze five areas in particular. (Note: some of the areas we describe below may have solutions through other emerging types of edge computing, but these are not yet commonly available.)

Figure 1

The web and the embedded worlds have very heterogeneous characteristics. Figure 1 (above) depicts this heterogeneity by comparing, qualitatively and quantitatively, the capacities of the two paradigms from both a hardware and a software perspective. Web services can rely on powerful underlying CPU architectures with high memory and storage capabilities. From a software perspective, web technologies can choose from, and benefit from, a multitude of sophisticated operating systems (OS) and complex software tools.

On the other hand, embedded systems must rely on the limited capacity of microcontroller units (MCUs) and CPUs that are much less powerful than general-purpose and consumer CPUs. The same applies to memory and storage, where 500 KB of SRAM and a few MB of flash memory can already be considered generous. There have been several attempts to bring the flexibility of Linux-based systems to the embedded scenario (e.g. the Yocto Project), but most 32-bit MCU-based devices can only run a real-time operating system, not a more complex distribution.

In simple terms, when Linux can run, system deployment is made easier because software portability becomes straightforward. Even greater cross-platform portability is possible thanks to the wide support and usage of lightweight virtualization technologies such as containers. With almost no effort, developers can ship the same software between entities running Linux distributions, as happens between cloud and edge.

The impossibility of running Linux and container-based virtualization on MCUs is one of the most limiting issues and biggest challenges for current deployments. In typical "cloud-edge-embedded devices" scenarios, cloud and edge services are developed and deployed with hardware and software technologies that are fundamentally different from, and easier to manage than, embedded technologies.

TinyML as-a-Service tries to tackle this issue by taking advantage of alternative (and lightweight) software solutions.

Figure 2

In the previous section, we considered at a high level how the technological differences between the web and embedded domains can implicitly and significantly affect the execution of ML tasks on IoT devices. Here, we analyze how a big technological gap also exists in the availability of ML-dedicated hardware and software across web, edge, and embedded entities.

From a hardware perspective, for most of computing history there have been only a few types of processor, mostly intended for general use. Recently, the relentless growth of artificial intelligence (AI) has led to the optimization of ML tasks on existing chip designs such as graphics processing units (GPUs), as well as to new dedicated hardware such as application-specific integrated circuits (ASICs), chips designed exclusively for the execution of specific ML operations. The common thread connecting these new devices is where they run: these credit-card-sized boards are designed to operate at the network edge.

At the beginning of this article we mentioned a few examples of this new family of devices (Edge TPU, Jetson Nano, Movidius). We foresee that in the near future more chip and hardware manufacturers, big and small, will invest resources in the design and production of ML-dedicated hardware. So far, however, there has not been the same effort in the embedded world.

This lack of hardware availability undermines homogeneous, seamless "cloud-to-embedded" ML deployments. In many scenarios, software can help compensate for hardware deficiencies, but the same boundaries we find in hardware apply to the development of software tools. Today, the web domain offers hundreds of ML-oriented applications, and that number keeps growing, thanks in part to open-source initiatives that let developers all over the world pool their efforts. The result is more effective, refined, and niche applications. However, porting these applications to embedded devices is not straightforward. The use of high-level programming languages (e.g. Python), as well as the large size of the software runtime (meaning both the runtime system and the runtime phase of the program lifecycle), are just some of the reasons why such porting is painful, if not impossible.

The main rationale behind the TinyML as-a-Service approach is precisely to break down the existing wall between cloud/edge and embedded entities. However, expecting exactly the same ML experience in the embedded domain as in the web and enterprise world would be unrealistic. It remains an irrefutable fact that size matters. ML inference is the only operation we reasonably foresee being executed on an IoT device. We are happy to leave all the other cumbersome ML tasks, such as data processing and training, to the better-equipped and more resourceful side of the scenario depicted in Figure 2.

In the next article, we will go through the different features which characterize TinyML as-a-Service and share the technological approach underlying the TinyML as-a-Service concept.

In the meantime, if you have not read it yet, we recommend our earlier introduction to TinyML as-a-Service.

The IoT world needs a complete ML experience. TinyML as-a-service can be one possible solution for making this enhanced experience possible, as well as expanding potential technology opportunities. Stay tuned!

Read the original:

TinyML as a Service and machine learning at the edge - Ericsson

Read More..

Kubernetes and containers are the perfect fit for machine learning – JAXenter

Machine learning is permeating every corner of the industry, from fraud detection to supply chain optimization to personalizing the customer experience. McKinsey has found that nearly half of enterprises have infused AI into at least one of their standard business processes, and Gartner says seven out of 10 enterprises will be using some form of AI by 2021. That's a short two years away.

But for businesses to take advantage of AI, they need an infrastructure that allows data scientists to experiment and iterate with different data sets, algorithms, and computing environments without slowing them down or placing a heavy burden on the IT department. That means they need a simple, automated way to quickly deploy code in a repeatable manner across local and cloud environments and to connect to the data sources they need.

A cloud-native environment built on containers is the most effective and efficient way to support this type of rapid development, as evidenced by announcements from big vendors like Google and HPE, which have each released new software and services to enable machine learning and deep learning in containers. Much as containers can speed the deployment of enterprise applications by packaging the code in a wrapper along with its runtime requirements, these same qualities make containers highly practical for machine learning.

Broadly speaking, there are three phases of an AI project where containers are beneficial: exploration, training, and deployment. Here's a look at what each involves and how containers can assist with each by reducing costs and simplifying deployment, allowing innovation to flourish.

To build an AI model, data scientists experiment with different data sets and machine learning algorithms to find the right data and algorithms to predict outcomes with maximum accuracy and efficiency. There are various libraries and frameworks for creating machine learning models for different problem types and industries. Speed of iteration and the ability to run tests in parallel are essential for data teams as they try to uncover new revenue streams and meet business goals in a reasonable timeframe.

Containers provide a way to package up these libraries for specific domains, point to the right data source and deploy algorithms in a consistent fashion. That way, data scientists have an isolated environment they can customize for their exploration, without needing IT to manage multiple sets of libraries and frameworks in a shared environment.

SEE ALSO:Unleash chaos engineering: Kubethanos kills half your Kubernetes pods

Once an AI model has been built, it needs to be trained against large volumes of data across different platforms to maximize accuracy and minimize resource utilization. Training is highly compute-intensive, and containers make it easy to scale workloads up and down across multiple compute nodes quickly. A scheduler identifies the optimal node based on available resources and other factors.
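The fan-out pattern described above can be illustrated locally with nothing but the standard library. Below, a thread pool plays the scheduler's role and runs independent trials in parallel; the train_trial function and its scoring rule are hypothetical stand-ins for a real containerized training job, not any scheduler's API.

```python
from concurrent.futures import ThreadPoolExecutor

def train_trial(learning_rate):
    """Stand-in for one containerized training job: it 'scores'
    a hyperparameter where a real job would train and evaluate
    a model. (Illustrative only.)"""
    return learning_rate, 1.0 - abs(learning_rate - 0.1)

candidates = [0.01, 0.05, 0.1, 0.5]
# Each trial runs in its own worker, just as each containerized
# job would land on whichever node the scheduler finds free.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(train_trial, candidates))
best_lr, best_score = max(results, key=lambda r: r[1])
# the toy scoring rule peaks at learning_rate == 0.1
```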

A distributed cloud environment also allows compute and storage to be managed separately, which cuts storage utilization and therefore costs. Traditionally, compute and storage were tightly coupled, but containers, along with a modern data-management plane, allow compute to be scaled independently and moved close to the data, wherever it resides.

With compute and storage separate, data scientists can run their models on different types of hardware, such as GPUs and specialized processors, to determine which model will provide the greatest accuracy and efficiency. They can also work to incrementally improve accuracy by adjusting weightings, biases and other parameters.

In production, a machine learning application will often combine several models that serve different purposes. One model might summarize the text in a social post, for example, while another assesses sentiment. Containers allow each model to be deployed as a microservice: an independent, lightweight program that developers can reuse in other applications.

Microservices also make it easier to deploy models in parallel in different production environments for purposes such as a/b testing, and the smaller programs allow models to be updated independently from the larger application, speeding release times, and reducing the room for error.
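The traffic-splitting half of a/b testing can be sketched in a few lines. In the hedged example below, the two lambdas stand in for independently deployed model microservices, and the router sends a configurable fraction of requests to the candidate model.

```python
import random

def make_ab_router(incumbent, candidate, fraction_b, seed=0):
    """Return a function that routes each request to the candidate
    model with probability fraction_b, else to the incumbent.
    (A generic sketch; real deployments usually split traffic at
    the service mesh or ingress layer.)"""
    rng = random.Random(seed)  # seeded for a reproducible split
    def route(request):
        if rng.random() < fraction_b:
            return "b", candidate(request)
        return "a", incumbent(request)
    return route

model_a = lambda text: "positive"   # placeholder sentiment models
model_b = lambda text: "negative"
route = make_ab_router(model_a, model_b, fraction_b=0.1)
arms = [route("great product")[0] for _ in range(1000)]
# roughly 10 per cent of requests reach the candidate model
```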

SEE ALSO:Artificial intelligence & machine learning: The brain of a smart city

At each stage of the process, containers allow data teams to explore, test and improve their machine learning programs more quickly and with minimal support from IT. Containers provide a portable and consistent environment that can be deployed rapidly in different environments to maximize the accuracy, performance, and efficiency of machine learning applications.

The cloud-native model has revolutionized how enterprise applications are deployed and managed by speeding innovation and reducing costs. It's time to bring these same advantages to machine learning and other forms of AI so that businesses can better serve their customers and compete more effectively.

See the original post:

Kubernetes and containers are the perfect fit for machine learning - JAXenter

Read More..

Another free web course to gain machine-learning skills (thanks, Finland), NIST probes ‘racist’ face-recog and more – The Register

Roundup As much of the Western world winds down for the Christmas period, here's a summary of this week's news from those machine-learning boffins who haven't broken into the eggnog too early.

Finland, Finland, Finland: The Nordic country everyone thinks is part of Scandinavia but isn't has long punched above its weight on the technology front as the home of Nokia, the Linux kernel, and so on. Now the Suomi state is making a crash course in artificial intelligence free to all.

The Elements of AI series was originally meant to be just for Finns to get up to speed on the basics of AI theory and practice. Many Finns have already done so, but as a Christmas present, the Finnish government is now making it available for everyone to try.

The course takes about six weeks to complete, with six individual modules and is available in English, Swedish, Estonian, Finnish, and German. If you complete 90 per cent of the course and get 50 per cent of the answers right then the course managers will send you a nice certificate.

Meanwhile, don't forget there are many cool and useful free online courses on neural networks and the like, such as Fast.ai's excellent series and Stanford's top-tier lectures and notes.

Yep, AI still racist and sexist: A major study by the US National Institute of Standards and Technology, better known as NIST, has revealed major failings in today's facial-recognition systems.

The study examined 189 software algorithms from 99 developers, although interestingly Amazon's Rekognition engine didn't take part, and the results aren't pretty. When it came to recognizing Asian and African American faces, the algorithms were wildly inaccurate compared to matching Caucasian faces, especially with systems from US developers.

"While it is usually incorrect to make statements across algorithms, we found empirical evidence for the existence of demographic differentials in the majority of the face recognition algorithms we studied," said Patrick Grother, a NIST computer scientist and the report's primary author.

"While we do not explore what might cause these differentials, this data will be valuable to policymakers, developers and end users in thinking about the limitations and appropriate use of these algorithms."

For sale: baby shoes, never worn: As Hemingway put it, the death of a child is one of the greatest tragedies that can occur, and Microsoft wants to do something about that using machine learning.

Redmond boffins worked with Tatiana Anderson and Jan-Marino Ramirez at Seattle Children's Research Institute, in America, and Edwin Mitchell at the University of Auckland, New Zealand, to analyse Sudden Unexpected Infant Death (SUID) cases. Using a decade's worth of data from the US Centers for Disease Control and Prevention (CDC), covering over 41 million births and 37,000 SUID deaths, the team sought to use specially prepared logistic-regression models to turn up some insights.

The results, published in the journal Pediatrics, were surprising: there was a clear difference between deaths that occurred in the first week after birth, dubbed SUEND (Sudden Unexpected Early Neonatal Death), and those that occurred between the first week and the end of a child's first year.

In the case of SUID, they found that rates were higher for unmarried, young mothers (between 15 and 24 years old), while this was not the case for SUEND cases. Instead, maternal smoking was highlighted as a major causative factor in SUEND situations, as were the length of pregnancy and birth weight.
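To make the modeling approach concrete, here is a toy logistic-regression calculation in plain Python. The coefficients and feature encoding are invented for illustration; they are not the estimates published in Pediatrics.

```python
import math

def logistic(z):
    return 1.0 / (1.0 + math.exp(-z))

def modeled_risk(intercept, coefs, features):
    """Logistic regression in one line of math: the log-odds are a
    weighted sum of risk factors, and the sigmoid maps that sum to
    a probability between 0 and 1."""
    z = intercept + sum(c * x for c, x in zip(coefs, features))
    return logistic(z)

# Hypothetical binary factors: [mother_under_25, unmarried, maternal_smoking]
coefs = [0.6, 0.4, 0.9]
baseline = modeled_risk(-5.0, coefs, [0, 0, 0])
elevated = modeled_risk(-5.0, coefs, [1, 1, 1])
# the modeled risk rises when the factors are present
```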

The team is now using the model to look into other causative factors, be they genetic, environmental, or something else. Hopefully such research will save many more lives in the future.

AI cracking calculus: Calculus, the bane of many schoolchildren's lives, appears to be right up AI's street.

A team of Facebook eggheads built a natural-language processing engine to understand and solve calculus problems, and compared the output with Wolfram Mathematica's output. The results were pretty stark: for basic equations, the AI solved them with 98 per cent accuracy, compared to 85 per cent for Mathematica.

With more complex calculations, however, the AI's accuracy drops off. It scored 81 per cent for a harder differential equation and just 40 per cent for more complex calculations.

"These results are surprising given the difficulty of neural models to perform simpler tasks like integer addition or multiplication," the team said in a paper [PDF] on Arxiv. "These results suggest that in the future, standard mathematical frameworks may benefit from integrating neural components in their solvers."
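Such systems typically serialize each expression tree into a token sequence that a sequence-to-sequence model can read; Facebook's paper used prefix notation for this. The sketch below shows the encoding idea with trees as nested tuples (the paper's exact token vocabulary may differ).

```python
def to_prefix(expr):
    """Serialize an expression tree (nested tuples) into a prefix
    token sequence: each operator is followed by its operands.
    Leaves are variables or numbers; no brackets are needed because
    every operator's arity is fixed."""
    if isinstance(expr, tuple):
        op, *args = expr
        tokens = [op]
        for arg in args:
            tokens.extend(to_prefix(arg))
        return tokens
    return [str(expr)]

# x**2 + 3*x as a tree, then as the sequence a model would consume:
tree = ("add", ("pow", "x", 2), ("mul", 3, "x"))
tokens = to_prefix(tree)
# -> ['add', 'pow', 'x', '2', 'mul', '3', 'x']
```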

Deep-fake crackdown: Speaking of Facebook: today, the antisocial network put out an announcement that it had shut down two sets of fake accounts pushing propaganda. One campaign, originating in the country of Georgia, had 39 Facebook accounts, 344 Pages, 13 Groups, and 22 Instagram accounts, now all shut down. The network was linked to the nation's Panda advertising agency, and was pushing pro-Georgian-government material.

What's the AI angle? Here it is: the other campaign was based in Vietnam, and was devoted to influencing US voters using Western-looking avatars generated by deep-fake software a la thispersondoesnotexist.com.

Some 610 accounts, 89 Pages, 156 Groups and 72 Instagram accounts were shut down. The effort was traced to a group calling itself Beauty of Life (BL), which Facebook linked to the Epoch Media Group, a stateside biz that's very fond of President Trump and spent $9.5m in Facebook advertising to push its messages.

"The BL-focused network repeatedly violated a number of our policies, including our policies against coordinated inauthentic behavior, spam and misrepresentation, to name just a few," said Nathaniel Gleicher, Head of Security Policy at Facebook.

"The BL is now banned from Facebook. We are continuing to investigate all linked networks, and will take action as appropriate if we determine they are engaged in deceptive behavior."

Facebook acknowledged that it took the action as a result of its own investigation and "benefited from open source reporting." This almost certainly refers to bullshit-busting website Snopes, which uncovered the BL network last month.

See the original post:

Another free web course to gain machine-learning skills (thanks, Finland), NIST probes 'racist' face-recog and more - The Register

Read More..