
Bitcoin and Encryption: A Race Between Criminals and the F.B.I. – The New York Times

Law enforcement also has an advantage when it gets ahold of digital devices. Despite claims from Apple, Google and even the Justice Department that smartphones are largely impenetrable, thousands of law enforcement agencies have tools that can infiltrate the latest phones to extract data.

"Police today are facing a situation of an explosion of data," said Yossi Carmil, the chief executive of Cellebrite, an Israeli company that has sold data extraction tools to more than 5,000 law enforcement agencies, including hundreds of small police departments across the United States. "The solutions are there. There is no real challenge to accessing the data."

The police also have an easier time getting to data stored in the cloud. Technology companies like Apple, Google and Microsoft regularly turn over customers' personal data, such as photographs, emails, contacts and text messages, to the authorities with a warrant.

From January 2013 through June 2020, Apple said, it turned over the contents of tens of thousands of iCloud accounts to U.S. law enforcement in 13,371 cases.

And on Friday, Apple said that in 2018, it had unknowingly turned over to the Justice Department the phone records of congressional staff members, their families and at least two members of Congress, including Representative Adam B. Schiff of California, now the chairman of the House Intelligence Committee. The subpoena was part of an investigation by the Trump administration into leaks of classified information.

Yet intercepting communications has remained a troublesome problem for the police. While criminals used to talk over channels that were relatively simple to tap (phones, emails and basic text messages), most now use encrypted messengers, which are not.

Two of the world's most popular messaging services, Apple's iMessage and Facebook's WhatsApp, use so-called end-to-end encryption, meaning only the sender and receiver can see the messages. Not even the companies have access to their contents, allowing Apple and Facebook to argue that they cannot turn them over to law enforcement.
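
To make the concept concrete, here is a minimal sketch of public-key end-to-end encryption using the PyNaCl library. It illustrates the general idea only, not the actual iMessage or WhatsApp implementation (WhatsApp is built on Signal's protocol, as noted later on this page).

```python
# Minimal sketch of end-to-end encryption with PyNaCl (pip install pynacl).
# Illustrative only; not the actual iMessage/WhatsApp implementation.
from nacl.public import PrivateKey, Box

# Each party generates a key pair on their own device; private keys never leave it.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts using her private key and Bob's public key.
sending_box = Box(alice_key, bob_key.public_key)
ciphertext = sending_box.encrypt(b"meet at noon")

# Any server relaying `ciphertext` cannot read it; only Bob can decrypt.
receiving_box = Box(bob_key, alice_key.public_key)
assert receiving_box.decrypt(ciphertext) == b"meet at noon"
```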

Read more:
Bitcoin and Encryption: A Race Between Criminals and the F.B.I. - The New York Times


How To Enable End-To-End Encryption In Zoom On Windows 10 – Wccftech

The pandemic has made Zoom one of the most popular video conferencing applications. When it comes to video conferencing apps, we always try to make sure our privacy settings are up to the mark to keep the communications secure.

Zoom is trying to ensure that users get the security they deserve, and it has an essential encryption feature that many people don't know about. End-to-end encryption ensures that even if you are hacked, the hacker will not be able to make any sense out of your data. It also keeps your data safe from the company itself.


Zoom initially only encrypted data on its own servers, but with the end-to-end encryption feature, encryption keys are generated on the user's computer, making your data truly secure. In today's tutorial, I will show you how to enable end-to-end encryption in Zoom on Windows 10 computers in just a few simple steps.

Step-1: Open Zoom App and sign in.

Step-2: Click on the settings cog on the top right corner of the app.

Step-3: Click on View More Settings at the bottom of the settings window.


Step-4: You will be directed to the settings in your browser. Click on the Settings tab on the left side of your screen.

Step-5: Click on the Meeting tab.

Step-6: Scroll down until you reach the toggle switch for "Allow use of end-to-end encryption" and turn it on. [If it is grey, it is off; if it is blue, it is on.]

Step-7: You will be asked to verify your phone number. Enter it and click on Send Verification Code. You will then be sent a 6-digit code on your given number. Enter that code and then move on to the next step.

Step-8: After verification, your settings will be updated. Click on "End-to-end encryption" in the "Default encryption type" section.

Step-9: Click Save.

After following these steps, your Zoom meetings will be end-to-end encrypted.

See original here:
How To Enable End-To-End Encryption In Zoom On Windows 10 - Wccftech


We’ve been shown time and again that strong encryption puts crims behind bars, so why do politicos hate it? – The Register

Column Back in October, a call by spy agencies to weaken end-to-end encryption "because of the children" provoked a bit of analysis on how many times UK Home Secretaries had banged the same drum. All of them, it turned out. All of the time.

The argument is a bit beyond Priti Patel, alas, as she ran the threadbare rag up the flagpole yet again in April, presumably on the grounds that the 50th time's the charm.

The real world has not done her argument any favours in the weeks since. Last Wednesday, law-abiding citizens around the world enjoyed hearing about a massive collar-feeling spree courtesy of Operation Trojan Shield. This was a sting that did better than many a startup: it flogged a respectable 12,000 custom messaging devices to the, if you will, crimmunity before using the intercepted data to reel in getting on for a thousand of its least attractive members.

Not enough? You'll have to go back to, oh, the day before, when the great Colonial Crypto Cashback scheme was revealed. Here, the ransomware'd fuel pipeline saw $2m returned from the maw of the malware mob after the Feds intercepted not only the blaggers' Bitcoin wallet but also the keys. You know, the stuff built from unbreakable, completely secure encryptonium.

Finally, because we must Think Of The Children, we can skip back into the distant days of last month, when the German police closed down the world's biggest paedophile picture palace, despite it being on what the world calls the Big Scary Darknet and what we know as the internet but with extra relays. That has rather a lot of encryption. Yet again, though, the ringleaders got their doors dismantled by size 13s at dawn while the punters nervously await their own disk scan delights.

All these things and so, so many more have happened in spite of not having the ability to break strong encryption. It's not as if these were heroic, decade-long one-off events either. They've delivered exactly the sort of results that we're told are impossible, and delivered them spectacularly. These are arrests at scale: welcome to the world of the kiloscrote bust.

We're familiar with the marketing message that the internet scales, that with the right techniques and planning, you can have a good idea in the morning and half a billion users by teatime. The idea that this applies to policing as well is harder to take on board, but the same drivers apply and the same benefits accrue (to the police, admittedly, rather than their customers).

The reason so many cloud services are possible and profitable is that they easily match the technology to the market. Most of the hard work's been done for you: your customers are familiar and at ease with internet technologies. They trust them. They may not trust you, but that's your job. If you deliver a good service, you'll get a useful group of regulars who'll reward you, perhaps with money but more often with data.

Guess what. Criminals are people too. What they do generates data, exactly as your Aunty Heather does as she goes online shopping, only with more guns, drugs, and fraud. Or maybe not, depending on your family. Persuade criminals to use a particular service, and you can literally sit on your blue-trousered behind drinking institutional coffee and watch them send you all their secrets. Because it's the internet, you can do all this with a very small team running the system, minimising the chances that mobster counter-intelligence will bribe their way into, steal, or spot what's going on.

Like all e-commerce, this depends on trust. As with all of us upstanding incorruptibles, the underworld does its research. It reads technical reportage, and it knows, as we know, that the basic mechanisms of standard encryption are mathematically secure (for now, and never without caveats, but good enough). So they happily assemble themselves in large groups of self-incriminating naughty people while Plod does the paperwork to swoop in and enjoy that 800-arrests-for-the-price-of-one online offer.

If they didn't trust the internet's encryption because of laws ensuring its insecurity, they wouldn't do this. They wouldn't stop being criminals, but they'd move on to doing something safer and more profitable, most likely finding ways to jemmy open the state-mandated back doors and make off with all our transactions. Not so much win-win but the other thing, oh, what is it... ah yes, lose-lose.

The evidence piles up day after day, week after week, world-weary Reg column after world-weary Reg column.

State-mandated insecure encryption is a very bad idea. You can't make anything more secure by making it less secure.

Good old-fashioned policing backed up by well-funded technical expertise and lots of human intelligence works just fine, and it bolsters, rather than threatens, the rights and protection of citizens. Yes, even the children. Think about that, Priti.

Visit link:
We've been shown time and again that strong encryption puts crims behind bars, so why do politicos hate it? - The Register


WhatsApp to Enable Multi-Device Support With End-to-End Encryption: Report – Gadgets 360

WhatsApp will make its multi-device support available with end-to-end encryption, according to a report. The Facebook-owned instant messaging app has marketed its privacy-focussed encryption for some time. It is claimed to protect text and voice messages, photos, videos, documents, and calls in a way that they aren't accessible by anyone except the sender and receiver. However, enabling the same level of protection on multiple devices alongside syncing communication between them is not that easy and involves technical challenges in its implementation.

Although WhatsApp is yet to provide official details, WhatsApp beta tracker WABetaInfo has reported that the end-to-end encryption available on WhatsApp will be compatible with its upcoming multi-device support.

Earlier this month, Mark Zuckerberg mentioned in an alleged conversation with WABetaInfo that chats when using multi-device support on WhatsApp will still be end-to-end encrypted. Screenshots shared by WABetaInfo showed that the Facebook CEO stated that the company solved the challenges involved in implementing end-to-end encryption in an elegant way to make sure that the chats between users are protected even when using the messaging app on multiple devices.
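
One plausible way to reconcile end-to-end encryption with multiple devices is client-side fan-out: the sender encrypts the message separately for each of the recipient's registered device keys, so no server ever holds a readable copy. The sketch below illustrates that general approach with PyNaCl; it is an assumption for illustration, not WhatsApp's actual protocol.

```python
# Sketch of client-side fan-out for multi-device E2E encryption (PyNaCl).
# A hypothetical illustration of the general approach, not WhatsApp's protocol.
from nacl.public import PrivateKey, Box

sender = PrivateKey.generate()

# The recipient registers one key pair per device (e.g. phone, laptop, tablet).
device_keys = {name: PrivateKey.generate() for name in ("phone", "laptop", "tablet")}

# The sender encrypts the same message once per device public key,
# so every device can decrypt it and no server ever sees plaintext.
message = b"hello on every device"
ciphertexts = {
    name: Box(sender, key.public_key).encrypt(message)
    for name, key in device_keys.items()
}

# Each device decrypts its own copy with its own private key.
for name, key in device_keys.items():
    assert Box(key, sender.public_key).decrypt(ciphertexts[name]) == message
```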

WhatsApp was thought to be working on enabling multi-device support since at least July 2019. The feature lets users simultaneously access the app on up to four devices. It seems to be at a final stage of its internal testing as screenshots detailing the new addition appeared online in the recent past. WhatsApp Head Will Cathcart also purportedly noted in the messages exchanged with WABetaInfo that the new addition could be provided in a public beta in the next month or two.

Alongside enabling end-to-end encryption when using multi-device support, WhatsApp is said to be bringing end-to-end encrypted backups. There is, however, no exact timeline on when it would be available even for public beta testers.

WhatsApp uses Signal's encryption protocol to offer an end-to-end encrypted communication experience on its app. Competitors including Google Messages have also embraced the same protection method to address privacy concerns raised by digital activists. However, since end-to-end encryption limits traceability on platforms, governments and regulators in some countries, including India, have demanded ways to get backdoor access.


Read more:
WhatsApp to Enable Multi-Device Support With End-to-End Encryption: Report - Gadgets 360


Pepperdine Graziadio Business School Taps Data Science to Help Prospective MBA Students Project Their Career and Earnings Potential – PRNewswire

MALIBU, Calif., June 16, 2021 /PRNewswire/ -- Pepperdine Graziadio Business School today announced the launch of a new initiative that will enable students to forecast the labor market value of earning a graduate business degree. In collaboration with Seattle-based data science startup AstrumU, the Pepperdine Graziadio Business School is pioneering the use of a new machine learning tool to help each prospective student estimate their return on investment from full-time and professional MBA degree programs at the front end of the enrollment process.

"As MBA candidates navigate an ever-changing world of work and a more competitive job market, it's critically important that business schools demonstrate the lasting relevance and return on investment that our alumni can expect after graduating," said Deryck J. van Rensburg, dean of the Pepperdine Graziadio Business School. "This work is about providing prospective MBA students with tangible insights based on alumni employment outcomes. It's about getting more transparent about how the MBA experience connects them to real-world opportunities for growth and advancement."

With AstrumU's Enrollment Marketing Toolkit, staff and administrators can analyze labor market, alumni, and employer data to demonstrate the economic and career trajectories of Pepperdine Graziadio Business School MBA alumni to prospective students. The technology will enable the business school to use sophisticated data science models to match course-level outcomes, academic performance, and extracurricular experiences with salary and job placement outcomes from data verified by employers. Students then receive a personalized prediction for their desired industry, based on how alumni with comparable career backgrounds and goals fared in the labor market.
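
Under the hood, this kind of matching can be framed as supervised learning on alumni records. Since AstrumU's actual models and features are not public, the following scikit-learn sketch, with invented feature names and toy data, only illustrates the general shape of such a salary-prediction model.

```python
# Minimal sketch of predicting alumni salary from program features (scikit-learn).
# Feature names and data are hypothetical; AstrumU's actual models are not public.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# Toy alumni records: prior experience, GPA, program type, post-MBA salary.
alumni = pd.DataFrame({
    "years_experience": [3, 7, 5, 10, 2, 8],
    "gpa": [3.4, 3.8, 3.1, 3.6, 3.9, 3.2],
    "full_time_mba": [1, 0, 1, 0, 1, 0],
    "salary": [95_000, 130_000, 105_000, 145_000, 90_000, 120_000],
})

X, y = alumni.drop(columns="salary"), alumni["salary"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)

# A prospective student's profile gets a personalized salary estimate.
prospect = pd.DataFrame([{"years_experience": 4, "gpa": 3.5, "full_time_mba": 1}])
print(f"Projected salary: ${model.predict(prospect)[0]:,.0f}")
```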

Using the same data, admissions counselors can easily personalize their communications with prospective students and enhance conversations regarding how degree programs can help to facilitate their personal and professional aspirations.

"With an increasingly competitive landscape for graduate programs and a rapidly changing labor market, students are becoming more and more discerning about the programs they select and are hungry for better information on how their educational experiences will translate into economic opportunity in the workforce," said Adam Wray, founder and CEO of AstrumU. "Forward-thinking institutions like Pepperdine Graziadio Business School are designing new ways to build transparency around tangible employment outcomes into the admissions process itself. It's helping to not just improve enrollment outcomes, but ultimately give students a greater degree of choice and agency as they chart their educational and career future."

The Pepperdine Graziadio Business School is one of the first graduate schools of business to pilot the new program. A total of twenty universities will form an initial cohort of pioneering institutions that will gain early access to the tool to boost student enrollment and retention using insights from the platform's analysis of millions of student educational and career journeys.

Founded in 1969, the Graziadio Business School offers a variety of business degree programs including full-time and part-time MBA programs, joint degree programs, as well as other executive doctorate, master's, and bachelor's degree programs. Programs are offered both online and across Pepperdine University's five California campuses.

About AstrumU: AstrumU translates educational experiences into economic opportunity. We are on a mission to quantify the return on education investment for learners, education providers, and employers. We help institutions measure the value created for incoming and returning students, while assisting them in securing industry partnerships that lead students seamlessly into high-demand career pathways. Institutions partner with AstrumU to drive enrollment and increase alumni and corporate engagement, while extending economic mobility opportunities inclusively to all learners.

About Pepperdine University Graziadio Business School: For more than 50 years, the Pepperdine Graziadio Business School has challenged individuals to think boldly and drive meaningful change within their industries and communities. Dedicated to developing Best for the World Leaders, the Graziadio School offers a comprehensive range of MBA, MS, executive, and doctoral degree programs grounded in integrity, innovation, critical thinking, and entrepreneurship. The Graziadio School advances experiential learning through small classes with distinguished faculty that stimulate critical thinking and meaningful connection, inspiring students and working professionals to realize their greatest potential as values-centered leaders. Follow Pepperdine Graziadio on Facebook, Twitter, Instagram, and LinkedIn.

SOURCE AstrumU

Read more here:

Pepperdine Graziadio Business School Taps Data Science to Help Prospective MBA Students Project Their Career and Earnings Potential - PRNewswire


What Data Scientists Learned by Modeling the Spread of Covid-19 – Smithsonian Magazine

In March 2020, as the spread of Covid-19 sent shockwaves around the nation, integrative biologist Lauren Ancel Meyers gave a virtual presentation to the press about her findings. In talking about how the disease could devastate local hospitals, she pointed to a graph where the steepest red curve on it was labeled "no social distancing." Hospitals in the Austin, Texas, area would be overwhelmed, she explained, if residents didn't reduce their interactions outside their household by 90 percent.

Meyers, who models diseases to understand how they spread and what strategies mitigate them, had been nervous about appearing in a public event and even declined the invitation at first. Her team at the University of Texas at Austin had just joined the city of Austin's task force on Covid and didn't know how, exactly, their models of Covid would be used. Moreover, because of the rapidly evolving emergency, her findings hadn't been vetted in the usual way.

"We were confident in our analyses but had never gone public with model projections that had not been through substantial internal validation and peer review," she writes in an e-mail. Ultimately, she decided the public needed clear communication about the science behind the new stay-at-home order in and around Austin.

The Covid-19 pandemic sparked a new era of disease modeling, one in which graphs once relegated to the pages of scientific journals graced the front pages of major news websites on a daily basis. Data scientists like Meyers were thrust into the public limelight, like meteorologists forecasting hurricanes for the first time on live television. They knew expectations were high, but that they could not perfectly predict the future. All they could do was use math and data as guides to guess at what the next day would bring.

As more of the United States population becomes fully vaccinated and the nation approaches a sense of pre-pandemic normal, disease modelers have the opportunity to look back on the last year-and-a-half in terms of what went well and what didn't. With so much unknown at the outset, such as how likely an individual is to transmit Covid under different circumstances, and how fatal it is in different age groups, it's no surprise that forecasts sometimes missed the mark, particularly in mid-2020. Models improved as more data became available on not just disease spread and mortality, but also on how human behavior sometimes differed from official public health mandates.

Modelers have had to play whack-a-mole with challenges they didn't originally anticipate. Data scientists didn't factor in that some individuals would misinterpret or outright ignore the advice of public health authorities, or that different localities would make varying decisions regarding social-distancing, mask-wearing and other mitigation strategies. These ever-changing variables, as well as underreported data on infections, hospitalizations and deaths, led models to miscalculate certain trends.

"Basically, Covid threw everything at us at once, and the modeling has required extensive efforts unlike other diseases," writes Ali Mokdad, professor at the Institute for Health Metrics and Evaluation, IHME, at the University of Washington, in an e-mail.

Still, Meyers considers this a golden age in terms of technological innovation for disease modeling. While no one invented a new branch of math to track Covid, disease models have become more complex and adaptable to a multitude of changing circumstances. And as the quality and amount of data researchers could access improved, so did their models.

A model uses math to describe a system based on a set of assumptions and data. The less information available about a situation so far, the worse the model will be at both describing the present moment and predicting what will happen tomorrow.

So in early 2020, data scientists never expected to exactly divine the number of Covid cases and deaths on any given day. But they aimed to have some framework to help communities, whether on a local or national level, prepare and respond to the situation as well as they could.

"Models are like guardrails to give some sense of what the future may hold," says Jeffrey Shaman, director of the Climate and Health Program at the Columbia University Mailman School of Public Health.

"You need to sort of suss out what might be coming your way, given these assumptions as to how human society will behave," he says. "And you have to change those assumptions, so that you can say what it may or may not do."

The Covid crisis also led to new collaborations between data scientists and decision-makers, leading to models oriented towards actionable solutions. When researchers partnered with public health professionals and other local stakeholders, they could tailor their forecasts toward specific community concerns and needs.

Meyers' team has been an integral part of the Austin area's Covid plans, meeting frequently with local officials to discuss the latest data, outlook and appropriate responses. The municipal task force brings together researchers with the mayor, the county judge, public health authorities, CEOs of major hospitals and the heads of public school systems. Meyers says this data-driven approach to policy-making helped to safeguard the city; compared to the rest of Texas, the Austin area has suffered the lowest Covid mortality rates.

"In the last year, we've probably advanced the art and science and applications of models as much as we did in probably the preceding decades," she says.

At the heart of Meyers' group's models of Covid dynamics, which they run in collaboration with the Texas Advanced Computing Center, are differential equations: essentially, math that describes a system that is constantly changing. Each equation corresponds to a state that an individual could be in, such as an age group, risk level for severe disease, whether they are vaccinated or not, and how those variables might change over time. The model then runs these equations as they relate to the likelihood of getting Covid in particular communities.

Differential equations have been around for centuries, and the approach of dividing a population into groups who are susceptible, infected, and recovered dates back to 1927. This is the basis for one popular kind of Covid model, which tries to simulate the spread of the disease based on assumptions about how many people an individual is likely to infect.
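
As an illustration, here is a minimal version of that classic susceptible-infected-recovered (SIR) compartment model, integrated with SciPy; the parameter values are arbitrary and not drawn from any published Covid model.

```python
# Minimal SIR compartment model (the classic 1927 framework), integrated with
# SciPy. Parameters are illustrative, not from any real Covid model.
import numpy as np
from scipy.integrate import odeint

def sir(y, t, beta, gamma):
    s, i, r = y
    ds = -beta * s * i              # susceptibles become infected
    di = beta * s * i - gamma * i   # infected recover at rate gamma
    dr = gamma * i
    return ds, di, dr

beta, gamma = 0.30, 0.10            # transmission and recovery rates (per day)
y0 = (0.999, 0.001, 0.0)            # fractions: susceptible, infected, recovered
t = np.linspace(0, 200, 201)

s, i, r = odeint(sir, y0, t, args=(beta, gamma)).T
print(f"Peak infected fraction: {i.max():.2%} on day {t[i.argmax()]:.0f}")
```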

But Covid demanded that data scientists make their existing toolboxes a lot more complex. For example, Shaman and colleagues created a meta-population model that included 375 locations linked by travel patterns between them.

Using information from all of those cities, "We were able to estimate accurately undocumented infection rates, the contagiousness of those undocumented infections, and the fact that pre-symptomatic shedding was taking place, all in one fell swoop, back in the end of January last year," he says.

The IHME modeling began originally to help University of Washington hospitals prepare for a surge in the state, and quickly expanded to model Covid cases and deaths around the world. In the spring of 2020, they launched an interactive website that included projections as well as a tool called "hospital resource use," showing at the U.S. state level how many hospital beds, and separately ICU beds, would be needed to meet the projected demand. Mokdad says many countries have used the IHME data to inform their Covid-related restrictions, prepare for disease surges and expand their hospital beds.

As the accuracy and abundance of data improved over the course of the pandemic, models attempting to describe what was going on got better, too.

In April and May of 2020, IHME predicted that Covid case numbers and deaths would continue declining. In fact, the Trump White House Council of Economic Advisers referenced IHME's projections of mortality in showcasing economic adviser Kevin Hassett's "cubic fit" curve, which predicted a much steeper drop-off in deaths than IHME did. Hassett's model, based on a mathematical function, was widely ridiculed at the time, as it had no basis in epidemiology.

But IHME's projections of a summertime decline didn't hold up, either. Instead, the U.S. continued to see high rates of infections and deaths, with a spike in July and August.

Mokdad notes that at that time, IHME didn't have data about mask use and mobility; instead, they had information about state mandates. They also learned over time that state-based restrictions did not necessarily predict behavior; there was significant variation in terms of adhering to protocols like social-distancing across states. The IHME models have improved because data has improved.

"Now we have mobility data from cell phones, we have surveys about mask-wearing, and all of this helps the model perform better," Mokdad says. "It was more a function of data than the model itself."

Better data is having tangible impacts. At the Centers for Disease Control and Prevention, Michael Johansson, who is leading the Covid-19 modeling team, noted an advance in hospitalization forecasts after state-level hospitalization data became publicly available in late 2020. In mid-November, the CDC gave all potential modeling groups the goal of forecasting the number of Covid-positive hospital admissions, and the common dataset put them on equal footing. That allowed the CDC to develop ensemble forecasts, made by combining different models, targeted at helping prepare for future demands in hospital services.

"This has improved the actionability and evaluation of these forecasts, which are incredibly useful for understanding where healthcare resource needs may be increasing," Johansson writes in an e-mail.
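
In its simplest form, an ensemble forecast just combines the individual models' predictions for the same target, for example by taking their mean or median. The toy sketch below uses invented numbers; real ensembles like the CDC's are more sophisticated, combining full predictive distributions rather than point estimates.

```python
# Toy sketch of an ensemble forecast: combine several models' predictions
# for the same target week. All numbers are made up for illustration.
import numpy as np

# Four hypothetical models' forecasts of next week's Covid hospital admissions.
forecasts = {
    "model_a": 1200,
    "model_b": 1450,
    "model_c": 1100,
    "model_d": 1350,
}

values = np.array(list(forecasts.values()))
print(f"Ensemble mean:   {values.mean():.0f} admissions")
print(f"Ensemble median: {np.median(values):.0f} admissions")
print(f"Spread (std):    {values.std():.0f}")  # disagreement signals uncertainty
```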

Meyers' initial Covid projections were based on simulations she and her team at the University of Texas, Austin, had been working on for more than a decade, since the 2009 H1N1 flu outbreak. They had created online tools and simulators to help the state of Texas plan for the next pandemic. When Covid-19 hit, Meyers' team was ready to spring into action.

"The moment we heard about this anomalous virus in Wuhan, we went to work," says Meyers, now the director of the UT Covid-19 Modeling Consortium. "I mean, we were building models, literally, the next day."

Researchers can lead policy-makers to mathematical models of the spread of a disease, but that doesn't necessarily mean the information will result in policy changes. In the case of Austin, however, Meyers' models helped convince the city of Austin and Travis County to issue a stay-at-home order in March of 2020, and then to extend it in May.

The Austin area task force came up with a color-coded system denoting five different stages of Covid-related restrictions and risks. Meyers' team tracks Covid-related hospital admissions in the metro area on a daily basis, which forms the basis of that system. When admission rates are low enough, a lower stage for the area is triggered. Most recently, Meyers worked with the city to revise those thresholds to take into account local vaccination rates.

But sometimes model-based recommendations were overruled by other governmental decisions.

In spring 2020, tension emerged between locals in Austin who wanted to keep strict restrictions on businesses and Texas policy makers who wanted to open the economy. This included construction work, which the state declared permissible.

Because of the nature of the job, construction workers are often in close contact, heightening the threat of viral exposure and severe disease. In April 2020, Meyers' group's modeling results showed that the Austin area's 500,000 construction workers had a four-to-five times greater likelihood of being hospitalized with Covid than people of the same age in different occupational groups.

The actual numbers from March to August turned out strikingly similar to the projections, with construction workers five times more likely to be hospitalized, according to Meyers and colleagues' analysis in JAMA Network Open.

"Maybe it would have been even worse, had the city not been aware of it and tried to encourage precautionary behavior," Meyers says. "But certainly it turned out that the risks were much higher, and probably did spill over into the communities where those workers lived."

Some researchers like Meyers had been preparing for their entire careers to test their disease models on an event like this. But one newcomer quickly became a minor celebrity.

Youyang Gu, a 27-year-old data scientist in New York, had never studied disease trends before Covid, but had experience in sports analytics and finance. In April of 2020, while visiting his parents in Santa Clara, California, Gu created a data-driven infectious disease model with a machine-learning component. He posted death forecasts for 50 states and 70 other countries at covid19-projections.com until October 2020; more recently he has looked at US vaccination trends and the path to normality.

While Meyers and Shaman say they didn't find any particular metric to be more reliable than any other, Gu initially focused only on the numbers of deaths because he thought deaths were rooted in better data than cases and hospitalizations. Gu says that may be a reason his models have sometimes better aligned with reality than those from established institutions, such as predicting the surge in the summer of 2020. He isn't sure what direct effects his models have had on policies, but last year the CDC cited his results.

Today, some of the leading models have a major disagreement about the extent of underreported deaths. The IHME model made a revision in May of this year, estimating that more than 900,000 deaths have occurred from Covid in the U.S., compared with the CDC number of just under 600,000. IHME researchers came up with the higher estimate by comparing deaths per week to the corresponding week in the previous year, and then accounting for other causes that might explain excess deaths, such as opioid use and low healthcare utilization. IHME forecasts that by September 1, the U.S. will have experienced 950,000 deaths from Covid.

This new approach contradicts many other estimates, which do not assume that there is such a large undercount in deaths from Covid. This is another example of how models diverge in their projections because different assumed conditions are built into their machinery.
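
The excess-mortality arithmetic behind such a revision can be sketched in a few lines: compare each week's deaths with the corresponding week of the previous year, then subtract excess deaths attributable to other causes. All numbers below are invented for illustration; IHME's actual adjustment is considerably more involved.

```python
# Toy excess-deaths estimate: weekly deaths vs. the same weeks a year earlier,
# minus excess attributable to non-Covid causes. All numbers invented.
import pandas as pd

weeks = pd.DataFrame({
    "deaths_2020":      [61_000, 63_500, 66_000, 64_200],
    "deaths_2019":      [54_000, 53_800, 54_500, 54_100],
    "non_covid_excess": [1_200, 1_100, 1_300, 1_250],  # e.g. opioid deaths
})

weeks["excess"] = weeks["deaths_2020"] - weeks["deaths_2019"]
weeks["covid_attributed"] = weeks["excess"] - weeks["non_covid_excess"]
print(weeks[["excess", "covid_attributed"]].sum())
```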

Covid models are now equipped to handle a lot of different factors and adapt in changing situations, but the disease has demonstrated the need to expect the unexpected, and be ready to innovate more as new challenges arise. Data scientists are thinking through how future Covid booster shots should be distributed, how to ensure the availability of face masks if they are needed urgently in the future, and other questions about this and other viruses.

"We're already hard at work trying to, with hopefully a little bit more lead time, try to think through how we should be responding to and predicting what COVID is going to do in the future," Meyers says.

View original post here:

What Data Scientists Learned by Modeling the Spread of Covid-19 - Smithsonian Magazine


8 Data Science Trends to Watch This Year – The Tech Report

More data is being collected now than ever before in human history. Data science, the field of analyzing, organizing, and gleaning insights from that data, is becoming increasingly important to the governments and private companies that collect this data.

Here are some of the developing trends in data science that will shape the field this year and beyond.

Python has developed a reputation as one of the most versatile and powerful programming languages in use. This is for good reason. Python's popularity is rooted in its simple and accessible syntax along with its statistical and analytical visualizations. It also has massive support in the form of a dedicated online community.

Python is set to become the go-to programming language for data science. Why? Because the object-oriented programming (OOP) concept is ideal when dealing with large datasets. Additionally, the aforementioned simple syntax allows programmers to accomplish a great deal with only a few lines of code.
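
For example, a handful of pandas lines is enough to load, clean, and summarize a dataset; the file and column names here are placeholders, not from any real project.

```python
# A few lines of pandas: load a dataset, clean it, and summarize it.
# "sales.csv" and its columns are placeholder names for illustration.
import pandas as pd

df = pd.read_csv("sales.csv", parse_dates=["date"])
monthly = (
    df.dropna(subset=["revenue"])                      # drop incomplete rows
      .groupby(df["date"].dt.to_period("M"))["revenue"]
      .agg(["sum", "mean", "count"])                   # monthly summary stats
)
print(monthly.head())
```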

Cybersecurity continues to be a major concern, with cybersecurity attacks up worldwide. More private and sensitive data is being collected than ever before. While the world will likely always need dedicated cybersecurity experts, artificial intelligence is starting to pick up some of the load.

AI cybersecurity takes some of the burden off human cybersecurity experts. It does this by processing large amounts of data faster than humans can. AI can detect potential security threats, vulnerabilities in code, and other suspicious activities. It can also use predictive analysis to address security threats before they start.

AI can also be used to address typical weak points in network security such as weak passwords. It does this by integrating security measures such as biometrics or facial recognition.

With the ever-increasing volume of data being collected, driven partially by the Internet of Things, there is more demand than ever for skilled data scientists.

Despite its unique strengths, AI cannot handle every aspect of data science. Data scientists are needed to sort and organize much of that data before it can be meaningfully analyzed by AI. Someone looking to pursue a data science degree is likely to find themselves with a promising array of career options going forward.

Blockchain is an emerging technology that uses decentralized nodes of information to create secure, validated chunks of data. These chunks can't be tampered with, manipulated, or falsified.

Blockchain technology is poised to disrupt certain aspects of data science, as both fields deal with large amounts of data. While it's yet to be fully explored, there's a developing trend toward integration between blockchain and data science. This typically relies on blockchain primarily for data integrity and security. Data science, for its part, emphasizes prediction and actionable insights.
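
The data structure at the core of blockchain is easy to sketch: each block stores a hash of its predecessor, so altering any earlier record breaks every later link. The minimal Python illustration below shows the principle only, not a production blockchain.

```python
# Minimal hash-chain sketch of the blockchain idea: each block commits to the
# previous block's hash, so tampering with any record invalidates the chain.
import hashlib
import json

def block_hash(block):
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

chain = [{"index": 0, "data": "genesis", "prev_hash": "0" * 64}]
for i, data in enumerate(["record A", "record B"], start=1):
    chain.append({"index": i, "data": data, "prev_hash": block_hash(chain[-1])})

def verify(chain):
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

print(verify(chain))           # True
chain[1]["data"] = "tampered"
print(verify(chain))           # False: every later link is now broken
```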

More companies will migrate their data and services to the cloud. This represents an attempt to cut investment costs and increase revenue.

Data science, by nature, requires massive amounts of data. Moving the means to process and store it to the cloud frees up local resources and reduces operating costs. Cloud providers offer pay-as-you-go resources such as databases, storage, and runtime.

Already important to the field, data visualization tools are becoming an ever more vital part of data science.

Visualization provides the key to identifying patterns, finding outliers, gleaning insight, and otherwise gaining an understanding of large amounts of data. Not only is this important to data scientists themselves, but it's also critical in helping present conclusions and insights to stakeholders and clients. Graphical tools, maps, graphs, charts, and other visualization techniques, and the tools to help create them, will play an increasing part in the application of data science.
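
As a brief sketch, a few lines of matplotlib (with synthetic data) show how quickly a plot exposes outliers that a table of raw numbers would hide.

```python
# Quick sketch: a scatter plot makes outliers visible at a glance.
# The data is synthetic, for illustration only.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)
x = rng.normal(50, 10, 300)
y = x * 1.5 + rng.normal(0, 8, 300)
y[::75] += 120                      # inject a few obvious outliers

plt.scatter(x, y, s=12, alpha=0.6)
plt.xlabel("feature")
plt.ylabel("response")
plt.title("Outliers stand out immediately in a scatter plot")
plt.show()
```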

Low-code and no-code platforms are creating some beneficial disruption in the software field.

LCNC platforms increase the accessibility of software solutions by creating application-development platforms that use intuitive, easy-to-use interfaces. These allow users to work without having much (or any) programming experience. Using a no-code platform, a user could create an application using drag-and-drop menus. That user could also use simple interfaces to build an application without having to write any code at all.

MLOps seeks to promote the best practices for using AI in data science and business.

A developing field just starting to get attention, MLOps grew out of DevOps. It's now set to help machine learning become an everyday part of mainstream business and data science. Data scientists are using MLOps to build efficient AI models and curate datasets in precise, disciplined ways. This practice will help create more robust AI and machine learning models that scale and evolve with changing needs.

More:

8 Data Science Trends to Watch This Year The Tech Report - The Tech Report


3 Types of Data Science SEO Teams and How They Work – Search Engine Journal

When it comes to successful data science for SEO, nothing is more important than having the right team in place.

Challenges in obtaining and ensuring the consistency of the data, as well as in your choice of machine learning models and in the associated analyses, all benefit from having team members with different skill sets collaborating to solve them.

This article presents the three main types of teams, who is on them, and how they work.

Let's open the floor with that loneliest of data science SEO professionals: the team of one.

The one-person team is often the reality in small and large structures alike. There are plenty of versatile people out there who can manage both the SEO and the data functions on their own.

The lone data science SEO professional can generally be described as an SEO expert who has decided to take advanced courses in computer science to focus on a more technical side of SEO.


They have mastered at least one programming language (such as R or Python) and use machine learning algorithms.

They are closely following Google updates like RankBrain, BERT, and MUM, as Google has been injecting increasingly more machine learning and AI into its algorithms.

These pros must be skilled in the automation of SEO processes to scale their efforts. This might include:

In my organization, we share these SEO use cases in the form of a Jupyter Notebook. However, it is easy to automate them using Papermill or DeepNote (which now offers an automatic mode to launch Jupyter Notebooks regularly) in order to run them daily.
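
As a minimal sketch, running such a notebook with Papermill takes only a few lines; the notebook name and parameters below are placeholders, not files from any real project.

```python
# Run a Jupyter Notebook of SEO checks programmatically with Papermill
# (pip install papermill). Notebook name and parameters are placeholders.
import papermill as pm

pm.execute_notebook(
    "seo_checks.ipynb",             # input notebook
    "seo_checks_output.ipynb",      # executed copy with results
    parameters={"site": "https://example.com", "max_urls": 500},
)
```

Scheduling a script like this with cron (or a DeepNote automation) is enough to produce a fresh, dated report every day.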

If you want to mix it up and enhance your professional value, there are excellent training courses for SEO enthusiasts to learn data science and conversely, for data scientists to learn SEO, as well.


The only limit is your motivation to learn new things.

Some prefer working alone; after all, it eliminates any of the bureaucracy or politics you might (but don't necessarily have to) find in larger teams.

But as the French proverb goes: "Alone we go faster; together we go further."

Even if projects are completed quickly, they may not end up as successful as they could have, had there been a wider range of skills and experience at the table.

Now, let's leave the solitary SEO and move on to teams of two people.

You may already know MVP as a Minimum Viable Product. This format is widely used in agile methods where the project starts with a prototype that evolves in one- to three-week iterations.

The MVT is the equivalent for a team. This team structure can help minimize the risks and costs of the project even while bringing more diverse perspectives to the table.

It consists of creating a team with only two members with complementary skill sets: often an SEO expert who also understands the mechanisms of machine learning, and a developer who tests ideas.

The team is formed for a limited period of time; typically about 6 weeks.

If we take content categorization for an ecommerce site, for example, often one person will test a method and implement the most efficient one.

However, an MVT could perform more complex tests with several models simultaneously, keeping the categorization that comes up the most often, and adding image categorization, for example.

This can be done automatically with all existing templates. Current technology makes it possible to reach 95% correct results, beyond which point the granularity of the results comes into play.
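
The "keep the categorization that comes up most often" step amounts to a simple majority vote across models, as in this sketch with invented model outputs.

```python
# Majority vote across several categorization models for one product page.
# The model names and outputs below are invented for illustration.
from collections import Counter

predictions = {
    "text_model_a": "running shoes",
    "text_model_b": "running shoes",
    "image_model":  "sneakers",
}

category, votes = Counter(predictions.values()).most_common(1)[0]
print(f"Chosen category: {category} ({votes}/{len(predictions)} votes)")
```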

PapersWithCode.com can help you stay up to date with the current state of technology in each field (such as text generation), and will most importantly provide the source code.

GPT-3 from OpenAI, for example, can be used for prescriptive SEO to recommend actions for text summarization, text generation, and image generation, all with impressive quality.


Come back in time with me for a moment and let's take a look at one of the best collaborations of all time: The A-Team.

Everyone on this iconic team had a specific role and as a result, they succeeded brilliantly in each of their collective missions.

Unfortunately, there were no episodes on SEO. But what might your data science SEO task force look like?

You will surely need an SEO expert working closely with a data scientist and a developer. Together, this team will run the project, prepare the data, and use the machine learning algorithms.

The SEO expert is best positioned to double as a project manager and handle communication with management and external stakeholders. (In larger companies, there may be dedicated roles for the teams manager and project leader.)

Here are several examples of projects that this type of team might be responsible for:


Of course, teams need tools to maximize their efforts. This brings us to the idea of data SEO-compliant software.

I believe there are three principles to adhere to carefully here in order to avoid using black-box tools that give you results without explaining their methodologies and algorithms.

1. Access to documentation that clearly explains the algorithms and parameters of the machine learning model.

2. The ability to reproduce the results yourself on a separate dataset to validate the methodology. This doesn't mean copying software: all the challenges are in the performance, security, reliability, and industrialization of machine learning models, not in the model or the methodology itself.

3. The tool must have followed a scientific approach by communicating the context, the objectives, the methods tested, and the final results.

Data SEO is a scientific approach to optimizing for search that relies on data analysis and the use of data science to make decisions.

Whatever your budget, it is possible to implement data science methods. The current trend is that concepts used by data scientists are becoming increasingly accessible to anyone interested in the field.


It is now up to you to take ownership of your own data science projects with the right skills and the teams. To your data science SEO success!


See the original post:

3 Types of Data Science SEO Teams and How They Work - Search Engine Journal


DrivenData and HeroX Announce Winners of NIST’s Synthetic Data Challenge – The Grand Junction Daily Sentinel

BOULDER, Colo., June 16, 2021 /PRNewswire/ --DrivenData, the host of data science competitions that advance solutions for social good, and HeroX, the social network for innovation and the world's leading platform for crowdsourced solutions, today announced the winners of the third and final sprint of the Algorithm Contest of the Differential Privacy Temporal Map Challenge, which was sponsored by the Public Safety Communications Research (PSCR) Division of the National Institute of Standards and Technology (NIST).

With a prize purse totaling $161,000 across the entire challenge, today's announcement of the third algorithm sprint offered $25,000 to the first place winner. The team "N - CRiPT", a group of differential privacy researchers from the National University of Singapore and Alibaba Group, secured first place. Their goal was to bring differential privacy into a practical setting. The second place winner was the "Minutemen" team, a group of differential privacy graduate students from the University of Massachusetts Amherst.

The focus of this prize challenge was to create synthetic data that preserves the characteristics of a dataset containing time and geographic information. Synthetic data has the ability to offer greater privacy protections than traditional anonymization techniques. Differentially private synthetic data can be shared with researchers, policy makers, and even the public without the risk of exposing individuals in the original data. However, the synthetic records are only useful if they preserve the trends and relationships in the original data.

Contestants of this challenge were charged with developing algorithms that de-identify datasets while maintaining a high level of accuracy. This ensures the data is both private and useful. Top contestants of the final sprint demonstrated algorithms that produce records with both more privacy and greater accuracy than the typical subsampling techniques used by many government agencies to release records.

The first sprint featured data captured from 911 calls in Baltimore, MD made over the course of one year. Participants in this sprint were tasked with developing de-identification algorithms designed to generate privatized data sets using the monthly reported incident counts for each type of incident by neighborhood. Winners were announced here.
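
The textbook building block for privatizing counts like these is the Laplace mechanism: add noise scaled to the query's sensitivity divided by the privacy budget epsilon. The sketch below shows only that standard technique; it is not any team's winning algorithm, and real entries must account for how many records each individual contributes.

```python
# Laplace mechanism, the textbook way to release differentially private counts.
# A sketch of the standard technique, not any team's winning algorithm.
import numpy as np

rng = np.random.default_rng(0)

def dp_count(true_count, epsilon, sensitivity=1.0):
    """Release a count with epsilon-differential privacy.

    Adding or removing one record changes the count by at most `sensitivity`,
    so Laplace noise with scale sensitivity/epsilon suffices.
    """
    return true_count + rng.laplace(scale=sensitivity / epsilon)

# Monthly incident counts for one neighborhood (illustrative numbers only).
true_counts = [42, 37, 55, 61]
private_counts = [max(0, round(dp_count(c, epsilon=1.0))) for c in true_counts]
print(private_counts)
```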

The second sprint used demographic data from the U.S. Census Bureau's American Community Survey, which surveyed individuals in various U.S. states from 2012 to 2018. The data set included 35 different survey features (such as age, sex, income, education, work and health insurance data) for every individual surveyed. Simulated longitudinal data was created by linking different individual records across multiple years, which increased the difficulty of protecting each simulated person's privacy. To succeed in this sprint, participants needed to build de-identification algorithms by generating a set of synthetic, privatized survey records that most accurately preserved the patterns in the original data. Winners were announced here.

The third sprint centered around taxi rides taken in Chicago, Illinois. Because the sprint focused on protecting the taxi drivers rather than just their trips, competitors needed to provide privacy for up to 200 records per individual driver, a very challenging problem. They were evaluated over 77 Chicago community areas. The deidentified synthetic data needed to preserve the characteristics of taxi trips in each community area, the patterns of traffic between communities, as well as the population characteristics of taxi drivers themselves (typical working times and locations). The top two winning teams were each able to produce synthetic data that provided very strong privacy protection and was also more accurate for analysis than data protected by traditional privacy techniques such as subsampling.

Challenge participants are now eligible to earn up to $5,000 for creating and executing a development plan that further improves the code quality of solutions and advances their usefulness to the public safety community. Participants can also earn the Open Source prize, an additional $4,000, by releasing their solutions in an open source repository. Winning solutions will be those that meet differential privacy after being uploaded to an open source repository.

DrivenData is a social enterprise dedicated to bringing the data tools and methods that are transforming industry to the world's biggest challenges. As part of that work, DrivenData's competition platform channels the skills and passion of data scientists, researchers, and other quantitative experts to build solutions for social good. These online machine learning challenges are designed to engage a large expert community, connect them with real-world data problems, and highlight their best solutions.

HeroX is a social network for crowdsourcing innovation and human ingenuity, co-founded in 2013 by serial entrepreneur Christian Cotichini and XPRIZE founder and futurist Peter Diamandis. HeroX offers a turnkey, easy-to-use platform that supports anyone, anywhere, to solve everyday business and world challenges using the power of the crowd. Uniquely positioned as the Social Network for Innovation, HeroX is the only place you can build, grow and curate your very own crowd.

To learn about eligibility requirements, visit challenge.gov, and for additional information about the challenge, visit DrivenData.org.

NIST, a nonregulatory agency of the U.S. Department of Commerce, promotes U.S. innovation and industrial competitiveness by advancing measurement science, standards and technology in ways that enhance economic security and improve our quality of life. To learn more about NIST, visit NIST.gov.

To arrange an interview and/or any media inquiries with NIST, please contact Jennifer Huergo at (202) 309-1027 and jennifer.huergo@nist.gov.

Visit link:

DrivenData and HeroX Announce Winners of NIST's Synthetic Data Challenge - The Grand Junction Daily Sentinel


UW Extended Campus, in partnership with UW System campuses, to launch five new online certificates this fall – University of Wisconsin System

MADISON, Wis. -- UW Extended Campus, in partnership with University of Wisconsin System campuses, will offer five new online certificates aligned with industry needs and high-growth occupations in September 2021. The new programs include graduate-level, semester-based certificates in Applied Bioinformatics, Data Science, Senior Living and Services Leadership, and Sustainability and Well-being; and an undergraduate-level certificate in Health Care Informatics, offered in the UW Flexible Option competency-based format.

All 13 UW System campuses are participating in at least one certificate program. UW Extended Campus, in collaboration with the universities, makes advanced education possible through flexible, online programs that combine the diverse expertise and resources of UW campuses and faculty.

"UW Extended Campus is a flexible, convenient education option for adult learners or anyone who wants to study at their own pace for whatever reason," said UW System President Tommy Thompson. "These new certificate programs will make additional learning available to more people."

Certificate requirements vary from four to six courses. It is possible to earn a certificate within one year; however, students may decide to take longer to complete a certificate based on work and life commitments. Like other UW Extended Campus programs, students pay the same tuition whether they live in Wisconsin or out of state.

The certificates provide skills training for professionals seeking career advancement in health care, technology, and business occupations. According to the U.S. Bureau of Labor Statistics, employment of medical and health services managers is projected to grow 32 percent from 2019 to 2029, and employment of computer and information research scientists is projected to grow 15 percent from 2019 to 2029, much faster than the average for all occupations. Coursework completed in the certificate programs offers an optional pathway to bachelor's and master's degrees offered through UW Extended Campus.

"Programs managed by UW Extended Campus are designed for working adults and professionally oriented students," said Aaron Brower, executive director of UW Extended Campus. "What I love about these programs is that they meet students where they are: they engage students in learning that fully connects their lives to the world."

The new certificates join a growing catalog of flexible, online degree and certificate programs designed for adult learners offered in collaboration with UW Extended Campus and UW System campus partners. Students with a variety of work, education, and life experience have found success in UW Extended Campus programs.

Prospective students seeking more information about the UW Extended Campus certificates are encouraged to visit the website, uwex.wisconsin.edu, call 1-608-262-2011 or 1-877-895-3276, or email learn@uwex.edu.

The University of Wisconsin System is one of the largest and most widely respected public higher education systems in the country. UW Extended Campus partners with all UW System campuses to offer online degrees and certificates, as well as continuing education and lifelong learning opportunities. Through UW Extended Campus, people of Wisconsin and beyond can access university resources and engage in online learning, wherever they live and work, fulfilling the promise of the Wisconsin Idea.

Continue reading here:

UW Extended Campus, in partnership with UW System campuses, to launch five new online certificates this fall - University of Wisconsin System
