
Physicists Are Reinventing the Laser – Gizmodo Australia

In the 1950s, when physicists were racing to invent the first laser, they found that the rules of quantum mechanics restricted how pure the colour of their light could be. Since then, physicists and engineers have always built lasers with those restrictions in mind. But new theoretical research from two independent groups of physicists indicates that nature is more lax than previously thought. The findings could lead to improved, more monochromatic lasers for applications such as quantum computing, which the researchers illustrate in two proposed laser designs.

"The work overthrows 60 years of understanding about what limits lasers," said physicist Howard Wiseman of Griffith University in Australia, whose group published their work in Nature Physics last October.

A laser, in essence, is a megaphone for light. The word itself, originally an acronym, reflects this function: light amplification by stimulated emission of radiation. Send in a photon of the right frequency, and the laser makes copies of it, multiplying the original signal.

These photon clones exit the laser in sync with each other, travelling "in phase," as the experts call it. You can think of it this way: Each photon is a wave, with its crest and trough lined up with its neighbour, marching together in lock-step out of the laser. This contrasts with most other light sources, such as your reading lamp or even the Sun, which both emit photons that disperse randomly.

The longer photons stay in sync, the more monochromatic the light. The colour of a light source corresponds to the wavelength of its photons, with green light spanning roughly the 500 to 550 nanometre range, for example. For multiple photons to stay in sync a long time, their wavelengths must line up very precisely, meaning the photons need to be as close to one colour as possible.

This synchrony of laser photons, known as temporal coherence, is one of the device's most useful properties. Many technologies make use of laser light's ridiculously fast and steady rhythm, its wave pattern repeating at hundreds of trillions of times a second for visible lasers. For example, this property underpins the world's most precise timekeeping devices, known as optical lattice clocks.

But photons gradually lose sync after they leave the laser; how long they stick together is known as the laser's coherence time. In 1958, physicists Arthur Schawlow and Charles Townes estimated the coherence time of a perfect laser. (This is a common physicist design strategy: consider the most ideal version of something before building the inevitably less perfect real-world device.) They found an equation thought to represent an ultimate coherence time limit for lasers, set by the laws of physics. Physicists refer to this as the Schawlow-Townes limit.
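For readers who want to see the limit itself, the linewidth formula usually attributed to Schawlow and Townes, and the coherence time it implies, can be written roughly as follows. (This is a standard textbook form added here for illustration rather than taken from the article, and the numerical prefactor varies between references by factors of two or four.)

```latex
% Schawlow-Townes linewidth (one common convention) and the corresponding
% coherence time; prefactor conventions differ between references.
\Delta\nu_{\text{laser}} \;\approx\; \frac{\pi\, h\, \nu\, (\Delta\nu_c)^2}{P_{\text{out}}},
\qquad
t_{\text{coh}} \;\sim\; \frac{1}{\pi\, \Delta\nu_{\text{laser}}}
```

Here h is Planck's constant, ν the laser's frequency, Δν_c the linewidth of the empty laser cavity and P_out the output power; the narrower the emitted linewidth, the longer the photons stay in sync.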

The two new papers find that the Schawlow-Townes limit is not the ultimate limit. "In principle, it should be possible to build lasers which are significantly more coherent," said physicist David Pekker of the University of Pittsburgh, who led the other group. Their paper, currently under peer review, is posted as a pre-print on arXiv.

Both groups argue that the Schawlow-Townes limit rests on assumptions about the laser that are no longer true. Schawlow and Townes basically thought of the laser as a hollow box, in which photons multiply and leave at a rate proportional to the amount of light inside the box. Put another way, the photons flow out of Schawlow and Townes's laser like water drains from a hole in a barrel. Water flows faster when the barrel is fuller, and vice versa.

But Wiseman and Pekker both found that if you place a valve on the laser to control the rate of the photon flow, you can actually make a laser coherent for much longer than the Schawlow-Townes limit. Wiseman's paper takes this a step further. Allowing for these photon-controlling valves, his team re-estimates the coherence time limit for the perfect laser. "We show that ours is the ultimate quantum limit," said Wiseman, meaning the true physical limit dictated by quantum mechanics.

Schawlow and Townes's estimate, while not the fundamental restriction on lasers physicists originally thought, was reasonable for its time, said Wiseman. No one had any means for precisely controlling the flow of light out of a laser in the way that Wiseman and Pekker propose. But today's lasers are a different story. Physicists can now control light with a multitude of devices developed for the budding quantum computing industry.

Pekker has teamed up with physicist Michael Hatridge, also of the University of Pittsburgh, to bring the new laser design to life. Hatridge's expertise involves building circuits out of superconducting wire for storing and controlling microwave-frequency photons. They plan to build a microwave-emitting laser known as a maser for programming qubits inside a quantum computer made of superconducting circuits. Though building this new maser will take years of work and troubleshooting, Hatridge said they have all the tools and knowledge to make it possible. "That's why we're excited about it, because it's just another engineering project," Hatridge said.

Wiseman is looking for collaborators to build his design, also a maser. "I would really, really like this to happen, but I recognise it's a long-term goal," he said.

The designs are "completely feasible," said physicist Steven Touzard of the National University of Singapore, who was not involved in either of the new papers. However, Pekker and Wiseman's work may not directly lead to useful commercial lasers, according to Touzard. He pointed out that builders of lasers do not commonly use the Schawlow-Townes limit to direct their designs. So overturning the limit could be more of a theoretical advancement than an engineering one, he said.

Curiously, the two new designs also contradict another piece of conventional wisdom about lasers. The devices do not produce light via so-called stimulated emission, which makes up the "s" and "e" in the acronym "laser." Stimulated emission is a type of interaction between light and matter, in which a photon impinges upon an atom and stimulates the atom to emit an identical photon. If we imagine a laser as a box of light, as before, a laser that amplifies light using stimulated emission multiplies the signal proportionally to the amount of light already in the box. Another type of laser invented in 2012, known as a superradiant laser, also does not use stimulated emission to amplify light, according to Touzard.

The idea of a laser has outgrown its name. It is no longer exclusively light amplification by stimulated emission of radiation.

Of course, many such examples exist in the English language. The change in meaning is known as semantic shift and is common wherever new technology is involved, according to linguist Micha Elsner of the Ohio State University. "Ships still sail across the ocean, even when no actual sails are involved," Elsner said in an email. "You can still dial someone's number even though your phone doesn't have a dial."

"Even though a word's etymology, its origin, certainly gives it a starting point, it does not determine its destiny forever going forward," linguist Brian Joseph of the Ohio State University said in an email.

As Cold War goals transitioned into 21st century ones, lasers have evolved, too. They've been around long enough to integrate into nearly all aspects of modern life: They can correct human vision, read our grocery barcodes, etch computer chips, transmit video files from the Moon, help steer self-driving cars, and set the mood at psychedelic ragers. And now, the laser could be reinvented again. A 60-year-old device remains a symbol of a sci-fi future.

Continued here:
Physicists Are Reinventing the Laser - Gizmodo Australia

Read More..

How the move to Edge-based AI could build trust for the future – TechRadar

The vast promise of AI is well publicized, with the potential to innovate almost every industry and make a positive difference in people's lives. However, its risks are equally familiar, which is why the European Commission launched its 2020 White Paper on AI, outlining the importance of building an ecosystem of trust around this increasingly advanced technology.

Policy and regulation are set to play a key part in delivering trustworthy AI, though any framework will require a high degree of nuance for this to be achieved while innovation is still encouraged.

Arguably the most pressing discussion that needs to be had concerns the two different models of AI, Edge-based and Cloud-based, which offer completely different ways of deploying this data-driven digital technology. The contrasting risks and benefits of each approach must be understood when developing future regulation.

The road to achieving trustworthy AI may be a long one, but Edge-based AI use cases illustrate how it could be the key to reaching the goals set out by the European Commission. From meeting regulatory standards to improving privacy and security and offering a better user experience, we tackle the benefits, and dangers, of AI living on the Edge.

First, we must tackle what each deployment actually is. The clue is in the name, but for those unaware, Cloud-based AI is AI that is deployed in the cloud. This means that devices with AI applications that capture data (say, a voice app using a microphone to capture sound) will send this data to large, remote servers over a complex IT infrastructure. Once it reaches these server farms, the data is processed, decisions may be taken, and the results are returned to the device and initiating application.

Edge-based AI may have a slightly more abstract billing, but its deployment is similarly straightforward. Instead of sending data to remote servers for processing, Edge-based AI applications keep all the data on the device, using the AI model that resides on the processing unit (living on the Edge of the device). The processed data (which has never left the device, be it a tablet, connected vehicle or smart fridge) is then consumed by the initiating application.

In some cases, the metadata gathered in Edge-based AI deployments can optionally be sent to servers or the Cloud (typically this contains basic information about the status of the device), but it will not impact the decisions made by the AI that lives on the Edge of the device.
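As a concrete illustration of the difference, here is a minimal sketch of the two data paths described above. It is not from the article; the endpoint URL, model object and helper names are hypothetical.

```python
# Minimal sketch contrasting the two data paths described above.
# The endpoint URL, model object and helper names are hypothetical.
import json
import urllib.request


def classify_in_cloud(audio_samples: list) -> dict:
    """Cloud-based AI: the raw data leaves the device for a remote server."""
    payload = json.dumps({"samples": audio_samples}).encode()
    request = urllib.request.Request(
        "https://ai.example.com/v1/classify",  # hypothetical cloud endpoint
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    # The network round trip adds latency, and the decision is made off-device.
    with urllib.request.urlopen(request) as response:
        return json.load(response)


def classify_on_edge(audio_samples: list, model) -> dict:
    """Edge-based AI: the model runs on the device; raw data never leaves it."""
    label = model.predict(audio_samples)  # local inference, low latency
    # Optionally, only status metadata is reported upstream, never the samples:
    # send_metadata({"device_id": "smart-speaker-01", "status": "ok"})
    return {"label": label}
```

In the first function the raw capture travels over the network; in the second it stays on the device and, at most, a small status report goes upstream.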

The question is why this matters, and the answer is simple: Edge-based models can solve many of the most challenging problems facing the Cloud-based alternative and help deliver trustworthy AI.

Many of the problems associated with Cloud-based AI deployments would directly impact European organizations' ability to develop an ecosystem of trust. The White Paper describes "opaque decision-making, gender-based or other kinds of discrimination, intrusion in our private lives or being used for criminal purposes" as the risks that are hampering AI and its adoption.

A number of these issues, concerns around privacy among them, could well be found in Cloud-based AI but are addressed in Edge-based deployments. For example, privacy is ensured with Edge-based AI, with neither the identity of the user nor the tasks they are carrying out disclosed to the Cloud server. What happens on the Edge device stays on the Edge device.

As for data security, Edge (or personal) devices are typically more difficult to breach than Cloud-based servers, ensuring users' private data is at far lower risk of being accessed and used by cyber criminals.

Another benefit of the Edge is energy consumption. Server farms, which are used for Cloud-based deployments, are power-hungry behemoths, meaning that Edge-based AI, which only relies upon single devices, is far more conservative in its energy use, even as end-user performance is boosted through improved latency.

Latency refers to the time it takes for the data to travel from the capturing device to where it is processed, and back. If it is only travelling to the Edge of the device, rather than remote servers, latency will be reduced, and the AI's decisions can be made in real time.

Edge-based AI's proximity to the user is key to its power, and impacts the vital and timely issue of trust. For consumers, it is far easier to trust their own device to handle sensitive data and process personal requests than it is a Cloud infrastructure.

If an ecosystem of trust is to be built, this must be a vital consideration when designing a regulatory framework for AI.

With Edge computing solving many problems around data privacy, cybersecurity, power consumption, scalability, and latency, it would be foolhardy for regulatory bodies to neglect to distinguish it from its Cloud-based counterpart.

This is not to say that Edge-based AI is perfect or that it provides unbreachable protection: Edge deployments can be reverse engineered by savvy cyber criminals, resulting in security complications that will have to be addressed through model encryption and on-the-fly inference.

However, its many benefits may hold the key to delivering trustworthy AI. A framework that takes these into consideration will empower industry leaders to innovate, whether it is through cars, wearables, or children's toys, without having their wings clipped through over-regulation.

Continued here:
How the move to Edge-based AI could build trust for the future - TechRadar

Read More..

The Role of Connected Manufacturing in Pandemic Recovery – IndustryWeek

Working in manufacturing has been interesting over the last 12 months, with a rapid, and yet mostly undiscussed, transition to working from home. With that has come a lot of really clever solutions adopted by industry professionals to minimize the negative impact of the pandemic on their output, with digital communications, file transfer systems, and cloud servers replacing local ones. A recent report by McKinsey states that a more digitally connected workforce, such as the workforce we've been forced to become by the pandemic, stands to unlock more than $100 billion in value for the manufacturing industry alone. This opportunity exists in areas such as productivity boosts of 20 to 30 percent in collaboration-intensive work processes like root cause investigation, supplier management, and maintenance. However, while the adoption of certain new digital ways of working is increasingly the new normal, the transition has not been a completely smooth one.

A generatively designed steering wheel, designed and manufactured in less than six months for Volkswagen's retro-future concept Microbus. (Credit: Volkswagen)

The downfalls of on-premises solutions

Some manufacturers have learned from the pandemic and so are better prepared for future emergencies or disruptions. Lessons learned include the vulnerability of being tied to on-premises data solutions, which can be unexpectedly challenging in situations of uncertainty. Instead, manufacturers who have weathered the storm better have leaned into digital tools that enable remote work and seamless collaboration, wherever staff may be. Being dependent upon a server that cannot be accessed or maintained due to a pandemic makes that resource unreliable when deadlines loom, a problem worsened by software that is locked to now-inaccessible workstations.

Autodesk Fusion 360 has been specifically built to support more collaborative, multi-skilled and distributed workforces. It can be downloaded and installed anywhere you have an internet connection, offers peak performance from a laptop, and does not require laborious license verification. UK mountain bike parts manufacturer PEMBREE established their business early in the pandemic using Fusion 360:

"There were a lot of late nights and challenges to overcome because we were launching during the COVID-19 lockdown. But when you step back and look at what we've achieved, it's fantastic. We couldn't have done it without Autodesk Fusion 360." - Phil Law, Founder, PEMBREE

PEMBREE Manufactured Pedal (Credit: PEMBREE)

Creating a connected workforce

Leaders in the manufacturing industry are paying attention to, and moving quickly in addressing, this new way of working. Cloud-powered technologies that ease collaboration, do not care where people are located, and provide infinite computing power on demand offer a seamless flow of productivity, irrespective of historical constraints like geography. As we turn the corner and see the release of a vaccine, we're also recognizing that some of the changes the pandemic brought to how we work will remain. Research shows that the number of permanent remote workers is set to double in 2021, to more than a third of the total global workforce. With a global workforce increasingly working from home, not only will cloud technologies be adopted in everyday work, but progressive companies will benefit from global talent (sometimes both less costly and more skilled than the talent available locally) being available to any company willing to recruit from outside its city walls, county, state or national borders.

We at Autodesk embrace the concept that manufacturers' solutions must enable seamless distributed work, provide all members of a team transparency into project status and give extreme computing capabilities to all employees; Fusion 360 is a platform that delivers exactly this.

You can now use digital fabrication to work on prototypes from home

Tackling the skills gap

The events of 2020 may have been unforeseen, but manufacturing leaders will take this as an opportunity to tackle the problems they exposed. One of these problems is that a lack of technical know-how parity across a team can be exacerbated by working remotely. Teams are often comprised of differing skill levels, and in a digital-dominant workplace a drop in skill sharing and output can result, necessitating sessions to discuss workflows and causing production delays.

Manufacturers can help bridge the skills gap by choosing processes and tools that are more accessible and easier to learn, namely by investing in collaboration tools that seamlessly work across different manufacturing machines and software. It is redundant to learn how to do something more than once because of a software limitation.

It may seem a lofty claim, but at Autodesk we envision Fusion 360 as ultimately being the keystone to every design and manufacturing challenge. We understand that no two design projects or manufactured products are the same. Across a range of projects, teams and individuals may find themselves needing deep electronics integration, testing and validation, or be the tip of the spear pushing design concepts further. Given manufacturing is inclusive of all these things, we believe it inefficient and risky to work in a fragmented fashion. Fusion 360 not only addresses the needs of design and manufacturing from every facet, it adopts modern working principles, eliminates the barrier of entry for data management, encourages and enables collaboration, and provides change management tools that make working with anyone, anywhere, as easy as possible.

Of course, we recognize adopting any new tool, especially under duress in a shifting emergency situation, is difficult and risky. To mitigate this risk, Autodesk offers multiple ways to get familiar, comfortable and proficient with our products. Whether you prefer video, text, guided lessons, or interactive webinars, we have you covered. We'll get you up to speed with minimal disruption. Fusion 360 can not only replace legacy manufacturing software investments, but it plays nicely with native file formats from most major vendors, with no intermediary file conversion necessary, so don't worry about whether that STEP or IGES is accurate anymore!

Whill, a modular wheelchair made with Fusion 360

Building on our pandemic knowledge

Some time ago, as many other daily operations in the workplace (and at home) adopted them, we recognized, and invested in, cloud-based technologies becoming the new normal in manufacturing and design. With kids working from home in Google Classroom, and everyone binging The Queen's Gambit on Netflix, why are we still counting on the CAD equivalent of DVDs in the mail?

Empowering the entire workforce with interoperable and accessible digital tools will help improve productivity, add value to the industry, and lay the foundations for next-gen technologies such as additive manufacturing and generative design. Ultimately, this empowerment will pave the way for new successes as manufacturing emerges from the pandemic, having learned the lessons the pandemic had to teach us.

Visit www.autodesk.com/fusion-360 to learn more!


See more here:
The Role of Connected Manufacturing in Pandemic Recovery - IndustryWeek

Read More..

Hospitals Save Costs and Gain Dynamic Service in the Cloud – HealthTech Magazine

Cloud Resources Increase Healthcare Efficiencies

Pennsylvania-based Geisinger is in the planning stage of a four-year cloud migration, and a desire to increase efficiency is one of the main drivers behind the migration, says CIO John Kravitz.

"A lot of applications that we have running on-premises are inefficiently written, and that requires a lot of capacity to be allocated to those apps, so we have a lot of compute and storage dedicated to an app that never gets used," Kravitz says. "Some of these may use 10 to 15 percent of that capacity, even though it's never getting touched, so it's a wasted resource."

Avoiding a "lift and shift" approach to cloud migration is key, he says, because otherwise the organization is just transferring those inefficiencies to the cloud.

"We are looking at every application to see if it has cloud enablement or Software as a Service, so you can ratchet down the need or wind it up slowly as the need for those resources presents itself," he says. "Most of the public clouds give you that scale-up capability when you need it."

That's a point of view shared by B.J. Moore, executive vice president and CIO at health system Providence, which serves patients in seven states. The cloud provides an opportunity to become smaller and more dynamic than the organization could be with on-premises solutions, he says.

"We can retire half of our apps, and we're finding a lot of these apps can be consolidated, so we're reducing our estate of apps massively and retiring thousands of servers in the process," Moore says. "It's about changing your practices to be in a cloud world, which is a just-in-time, elastic environment."

The cloud also supports simplified, centralized data access, says Moore, pointing out that a cloud-based data lake provides storage that can be expanded, along with compute performance that can be ramped up when needed.

"We use advanced AI models to predict COVID outbreaks, and if we didn't have the cloud, we wouldn't have been able to do this modeling," he says. "All the waves of innovation are going to be in the cloud, and if you want to compete, you have to be in the cloud."

RELATED: Avoid three common mistakes in cloud migration.

Moving to the cloud also will bring benefits to Geisinger's virtual care delivery services, including videoconferencing capabilities, says Kravitz. In addition, it will facilitate easier access to data for remote office workers, although he stresses that data security must remain a paramount concern.

The cloud delivered a similar benefit to Providence, where it helped the organization meet a rapid rise in demand for virtual visits immediately after COVID-related closures, says Moore.

"The beauty of the cloud is that we just added licenses, whereas if it were on-premises, we would have had to wait months to add and install more servers," he says. "In a crisis like this, that elasticity is invaluable."

Kravitz also points to benefits for Geisinger's research initiatives, including work with genomic sequencing. Moving data from New York City-based biotechnology firm Regeneron to the Amazon Web Services cloud supports mass compute on that data for patient care purposes and greater flexibility in the use of resources.

"It would cost us millions to have that hardware hardly being used on-premises," says Kravitz. "It makes it more cost effective for us to just delete that data when we are done with it."

View post:
Hospitals Save Costs and Gain Dynamic Service in the Cloud - HealthTech Magazine

Read More..

What 2021 holds for the web and cloud hosting industry – ValueWalk

The limitations on the growth of Web hosting are merely macro-economic, with top Web hosting providers deploying various innovative ways of attracting new customers and increasing their share of the market. The top market players now offer free website builders and logo makers in a bid to capture market share in one of the most competitive but profitable marketplaces right now.

You may have heard that "it is all in the cloud." Not quite: actually, many data centers are underwater. Web hosting is a service that manages internet servers; web hosting providers facilitate this service by allowing corporations and private individuals to host websites and other content on the internet using their servers. Web hosting is the core function of most providers, yet the most successful market players offer additional resources, such as free or inexpensive website builders or logo designers, to attract new customers and increase their market share. One thing all participants in the market know is that speed matters a lot, and they are all geared up to compete on this metric.

Clearly, with Amazon, Microsoft and Google Cloud dominating this space, smaller brands had to be innovative to capture a share of the market, hence the motivation to launch tools such as web builders, logo makers, etc.


Bluehost is one of the biggest hosting providers worldwide and continues to grow in popularity due to its host of added features and competitive prices. Bluehost offers extremely affordable hosting plans that cover almost every aspect of their customers' hosting needs. Bluehost also includes the free Weebly website builder in their most basic plan. It is a very basic website builder with no templates, but customers can easily use it to create websites of up to six pages.

GoDaddy offers a wide-ranging list of web hosting plans suited to all budgets and website needs. They are one of the fastest hosting providers, with exceptional page loading speed and 99.97% uptime. They also offer their own website builder, GoCentral, an all-in-one website builder with integrated marketing tools and hundreds of designer-made templates.

Weebly is considered one of the more powerful free web hosting providers, with exceptional site speed and reliability. Weebly's website builder is one of the easiest to use; however, it is limited in its range of add-on features and customization. Yet, unlike other free hosting platforms with website builders, Weebly's customers are not plagued by recurring ads.

Squarespace has a range of four hosting plans, all offering unlimited bandwidth and storage. Squarespace's website builder is slightly more complex than your average drag-and-drop website builder, but customers will not find better quality templates or in-house features from other providers. Squarespace's website builder is not free, but they do offer a free 14-day trial for customers to try out.

Wix is also one of the biggest hosting providers, with over 1.1 million websites built on their website builder. Wix's most basic hosting plan includes the use of their free website builder, allowing customers to create websites using all their templates; customers will, however, experience recurring ads. To avoid the ads, customers will have to opt for Wix's Combo or Business VIP plan, both of which carry monthly charges.

Web hosting providers often use inexpensive and easy-to-use website builders to attract new customers. A vast majority of customers are people who originally only wanted to build a website and then found themselves needing a web hosting provider to host their website. The total number of websites a web hosting provider can power, sourced directly from their website builder, can have a dramatic impact on their total market share.

Wix, Squarespace, GoDaddy and Weebly, in that order, are the top website builders with the most powered websites, and although Squarespace, Wix and Weebly also have the most powered websites in the Top 1m, Webflow (2.67%), Bubble (2.36%) and Tilda (2.36%) have the highest percentage of websites, in relation to the total number of websites they host, on the list of Top 1m websites.

Regardless of the impressive standing of providers like Squarespace, Wix and Weebly, the website builders with the highest expected growth rates are Strikingly (expected to grow by 12%), Carrd (11%) and Webflow (11%), and interestingly both Strikingly and Webflow offer web hosting plans.

TRUiC analysts have produced a report on the best vendors to use when you want to build a website. Free and inexpensive website builders and logo designers are the perfect lure for web hosting providers in search of new customers. Customers are lured into building their website on a free website builder without realizing they will still require a hosting provider to host their website. There are countless website builders for people to build a website for free. Customers in need of easy-to-use website builders are simply another customer pool from which the top hosting providers draw new customers.

What are the implications for the web and cloud hosting industry?

The growing demand for easy-to-use website builders has significantly impacted the web and cloud hosting industry as well as how hosting providers are able to improve their share of the web hosting market by tapping into the website building market.

Web hosting providers are one by one adding website builders and logo designers to their range of hosting services. This has proven to work well for hosting providers like GoDaddy, Wix, Weebly and Squarespace.

Will this enable these players to square up against Microsoft, Amazon and Google? Judging by their recent growth on the stock market (for example, Wix), they may be on the right path.

Read the rest here:
What 2021 holds for the web and cloud hosting industry - ValueWalk

Read More..

We need rules for facial recognition, and we need them now – Los Angeles Times

The powers that be at UCLA thought it was a good idea at the time: using state-of-the-art technology to scan students' faces to grant access to campus buildings. Students thought otherwise.

"The implementation of facial recognition technology would present a major breach of students' privacy and make students feel unsafe on a campus they are supposed to call home," the Daily Bruin said in an editorial last year.

UCLA dropped the facial recognition plan a few weeks later. "We have determined that the potential benefits are limited and are vastly outweighed by the concerns of our campus community," officials declared.

I recalled that fracas after the Federal Trade Commission announced the other day that it had reached a settlement with a San Francisco company called Everalbum, which offered online storage of photos and videos.

The company, via its Ever app, scanned millions of facial images without customers' knowledge and used the data to develop facial recognition software for corporate clients, the FTC said.

Everalbum also promised users it would delete their photos and videos from its cloud servers if they closed their account. However, the company retained them indefinitely, the agency said.

"Using facial recognition, companies can turn photos of your loved ones into sensitive biometric data," said Andrew Smith, director of the FTC's Bureau of Consumer Protection.

"Ensuring that companies keep their promises to customers about how they use and handle biometric data will continue to be a high priority for the FTC," he said.

Be that as it may, there's a lot of money to be made with such cutting-edge technology. Experts tell me consumers need to be vigilant about privacy violations as some of the biggest names in the tech world, including Google, Amazon, Facebook and Apple, pursue advances in the field.

"Since there aren't federal laws on facial recognition, it seems pretty likely that there are other companies using this invasive technology without users' knowledge or consent," said Caitlin Seeley George, campaign director for the digital rights group Fight for the Future.

She called Everalbum's alleged practices "yet another example of how corporations are abusing facial recognition, posing as much harm to people's privacy as government and law enforcement use."

Facial recognition technology took center stage after the Jan. 6 riot at the Capitol. Law enforcement agencies nationwide have been using facial recognition systems to identify participants from photos and videos posted by the rioters.

That's creepy, to be sure, but it strikes me as a legitimate use of such technology. Every rioter in the building was breaking the law and many were foolishly bragging about it on social media. These people deserve their comeuppance.

In the absence of clear rules, however, some of the big dogs in the tech world have adopted go-slow approaches to facial recognition, at least as far as law enforcement is concerned.

Microsoft said last year that it wouldn't sell its facial recognition software to police departments until the federal government regulates such systems. Amazon announced a one-year moratorium on allowing police forces to use its facial recognition technology.

But law enforcement is just one part of the equation. There's also the growing trend of businesses using facial recognition to identify consumers.

"Consumers need to know that while facial recognition technology seems benign, it is slowly normalizing surveillance and eroding our privacy," said Shobita Parthasarathy, a professor of public policy at the University of Michigan.

Not least among the potential issues, researchers at MIT and the University of Toronto found that Amazon's facial recognition tends to misidentify women with darker skin, illustrating a troubling racial and gender bias.

Then there's the matter of whether people are being identified and sorted by businesses without their permission.

Facebook agreed to pay $550 million last year to settle a class-action lawsuit alleging the company violated an Illinois privacy law with its facial recognition activities.

The Everalbum case illustrates how facial recognition is spreading like poison ivy in the business world, with at least some companies quietly exploiting the technology for questionable purposes.

"Between September 2017 and August 2019, Everalbum combined millions of facial images that it extracted from Ever users' photos with facial images that Everalbum obtained from publicly available datasets," the FTC said in its complaint.

This vast store of images was then used by the company to develop sweeping facial recognition capabilities that could be sold to other companies, it said.

Everalbum shut down its Ever app last August and rebranded the company as Paravision AI. The company's website says it continues to sell a wide range of face recognition applications.

"Paravision has no plans to run a consumer business moving forward," a company spokesman told me, asking that his name be withheld even though he's, you know, a spokesman.

He said Paravision's current facial recognition technology does not use any Ever users' data.

Emily Hand, a professor of computer science and engineering at the University of Nevada, Reno, said facial recognition data is a highly sought-after resource for many businesses. It's one more way of knowing who you are and how you behave.

Hand said that for every company that gets in trouble, "there's 10 or more that didn't get caught."

Seeley George at Fight for the Future said, "Congress needs to act now to ban facial recognition, and should absolutely stay away from industry-friendly regulations that could speed up adoption of the technology and make it even more pervasive."

She's not alone in that sentiment. Amnesty International similarly called this week for a global ban on facial recognition systems.

I doubt that will happen. With the biggest names in Silicon Valley heavily invested in this technology, it's not going away. What's needed are clear rules for how such data can be collected and used, especially by the private sector.

Any company employing facial recognition technology needs to prominently disclose its practices and give consumers the ability to easily opt out. Better still, companies should have to ask our permission before scanning and storing our faces.

"Today's facial recognition technology is fundamentally flawed and reinforces harmful biases," Rohit Chopra, then an FTC commissioner, said after the Everalbum settlement was announced.

"With the tsunami of data being collected on individuals, we need all hands on deck to keep these companies in check," he said.

Chopra has since been appointed by President Biden to serve as director of the Consumer Financial Protection Bureau.

We can all recognize that as a positive step.

Here is the original post:
We need rules for facial recognition, and we need them now - Los Angeles Times

Read More..

January 2021 Report on Impact of 5G and Cloud on Transformation of Telecom Networks | – Lightcounting Market Research

January 2021 Report on Impact of 5G and Cloud on Transformation of Telecom Networks

$5,000.00

This LightCounting report focuses on the changing marketplace facing the communications service providers (CSPs); in particular, the impact of adopting cloud practices and the associated challenges, and the advent of 5G and other technologies, on the CSPs' networking infrastructure.

Embracing cloud technologies represents a key inflection point for the CSPs as they change how they build and operate their networks. Instead of working with select systems vendors for individual service introductions and network upgrades, the CSPs are turning to software, network disaggregation and even white-box hardware to make their networks open and software-defined. The CSPs continue to work closely with established systems vendors as they transform their networks but by adopting open disaggregated designs, they are engaging a wider community of suppliers. With networks based on disaggregated designs and software, the CSPs seek to more easily update their networks, switching in and out vendors as required. Separating the software from the hardware and opening up designs also promise the faster introduction of services, a long-sought goal of the operators and key to improving their revenues and innovativeness. But such a transformation is challenging and is a major upheaval for the CSPs and for the telecom industry in general.

The industry has come a long way since the European Telecommunications Standards Institute (ETSI) Network Functions Virtualization (NFV) White Paper first articulated the network transformation vision in 2012, and much remains to be done before the goal of network transformation is achieved. Yet there are already glimpses of what is becoming possible for the CSPs. One is the emergence of Rakuten Mobile, demonstrating an Open RAN-based mobile network built using a disaggregated design and code-based network functions, while Deutsche Telekom has gone live with its first Access 4.0 disaggregated broadband access platform based on software and servers, providing fiber-to-the-home (FTTH) services to customers in Stuttgart, Germany.

What is clear is that the industry is on a new path: the leading CSPs have consigned to history the traditional way of building networks based on proprietary platforms from individual vendors. The CSPs are more hands-on, defining their needs to the vendor community, working through open networking organizations, and using incremental steps to achieve their goals. Another consequence of disaggregation and the software-defined network is the internet content providers' (ICPs) growing role in telecoms. Amazon Web Services, Microsoft Azure and Google Cloud all recognize the opportunity telecoms represents and are partnering with the CSPs as both expand their cloud footprints at the network edge.


Here is the original post:
January 2021 Report on Impact of 5G and Cloud on Transformation of Telecom Networks | - Lightcounting Market Research

Read More..

The push-to-talk ecosystem: Cellular, Wi-Fi, and unified platforms – Security Magazine


Read the original here:
The push-to-talk ecosystem: Cellular, Wi-Fi, and unified platforms - Security Magazine

Read More..

What are the things to consider before moving to the cloud? – BetaNews

At this point, even the most stubborn holdouts have to admit that the cloud offers unbeatable levels of performance, stability, convenience, and security. Working through the COVID-19 pandemic has made that abundantly clear. Keeping key files in local storage quickly loses its luster when you're unable to access that local storage due to travel and workplace restrictions.

If you're in that position, then, you're likely in the phase between accepting what you need to do (move your files and processes to the cloud) and actually doing it. And while there is a sense of urgency to the task ahead of you, it's entirely reasonable to think things through before you proceed. It's a massive change, after all, and you want it to be as smooth as possible.

To help you navigate this process, we're going to look at some of the key things you need to consider before you move to the cloud. Once you've gone through everything and figured out the specifics, you'll be ready to move ahead. Let's get started.

What platforms and tools do you want to use?

Talk about the cloud can give the impression that it's a single all-encompassing system, but that obviously isn't the case. Moving to the cloud just involves passing storage and processing to servers and services accessed through the internet, meaning there are so many viable routes you can take. If you want the best results, though, you'll need to choose extremely carefully.

If you can budget for it, consulting a cloud solution distributor (intY being a great example) is going to be a huge boon here, as they can advise you regarding the most popular options on the market and suggest a cost-effective lineup that neatly suits your requirements. Going to the experts is always preferable, particularly for a project like this.

If you can't budget for that, then you'll need to do extensive research to see which hosting solutions and applications can deliver what you need. This shouldn't be too onerous a task since the internet is full of free guides, but remember to check trustworthy sources and find articles that are up-to-date.

When can you accommodate the necessary downtime?

Moving to the cloud isn't something you can breeze through during a lunch break. Even when you have everything queued up, it'll still take time to ensure that everything goes smoothly: you'll need to check, double-check, and triple-check to confirm success. If you outsource the move, you'll likely be given a set amount of downtime to accept -- and it could be far more than you'd like (migrations can take a remarkably long time, per TechRepublic).

Before you begin, you need to work out when you can fit that downtime in. If you have a hectic period of business coming up, it won't exactly be ideal for your systems to go down while you're trying to get things done -- so what time would be suitable? You could line up some training days to occupy your resources while the move gets wrapped up. That's a solid option, though there are others. The important thing is that you make a decision.

How much essential data do you have to transfer?

The average house move doesn't bring everything along for the ride, because there are always possessions that aren't really worth the effort. The same is true when you move to the cloud. You'll have files that you aren't going to need, with data that you might as well have deleted years prior. Due to this, you can optimize your move by sorting through your data.

Once you know what needs to be kept, you can tally things up with reasonable accuracy, and use that to form a stronger idea of how much space you'll actually need. Get everyone involved in the estimation process (digital transformation is a group effort). Most cloud platforms have various performance and storage tiers, so taking this into account at an early stage will make it easier to estimate your costs.
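If you want a rough starting point for that tally, a short script along these lines can do the counting. This is a sketch only: the folder paths, tier names and per-gigabyte prices are placeholders, not any provider's real pricing.

```python
# Rough sizing sketch: total the bytes under the folders you plan to keep,
# then map the result onto illustrative storage tiers. The folder paths,
# tier names and per-GB prices are placeholders, not real provider pricing.
import os


def total_bytes(paths):
    """Sum the size of every file under the given directories."""
    size = 0
    for root_dir in paths:
        for dirpath, _dirnames, filenames in os.walk(root_dir):
            for name in filenames:
                try:
                    size += os.path.getsize(os.path.join(dirpath, name))
                except OSError:
                    pass  # skip files that disappear or cannot be read
    return size


# Placeholder tiers: (name, capacity ceiling in GB, assumed USD per GB per month)
TIERS = [("standard", 1_000, 0.023), ("archive", 100_000, 0.004)]

if __name__ == "__main__":
    keep = ["/srv/projects", "/srv/shared-docs"]  # folders you decided to keep
    gigabytes = total_bytes(keep) / 1e9
    for name, ceiling_gb, price_per_gb in TIERS:
        if gigabytes <= ceiling_gb:
            monthly = gigabytes * price_per_gb
            print(f"~{gigabytes:,.0f} GB to migrate; '{name}' tier costs about ${monthly:,.2f}/month")
            break
```

Run against the folders you actually intend to keep, the output gives you a defensible first estimate to take into pricing discussions, which you can then refine against your chosen provider's real tiers.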

What can you do with your old hardware?

Lastly, something that companies often overlook is the hardware they're moving away from, leading them to simply throw it away. This is a waste. It may be viable to sell it, first of all, but even that isn't strictly necessary. It could most likely be repurposed. Local servers can be used for low-priority project drafts, for instance, or as backups for cloud systems.

The latter option ties into the hybrid approach of using local and cloud storage together (Citrix has a good guide on this). If that's something that interests you, look into it before you make a commitment. You have enough choices to come up with a path that really suits you. Don't make the mistake of rushing.

Image credit: Pxfuel

Stevie Nicks is Digital Editor at Just Another Magazine -- a website that covers the topics you care about. You'll find articles about lifestyle, travel, business and trends on the site, each of which is written in each writer's unique style.

See the article here:
What are the things to consider before moving to the cloud? - BetaNews

Read More..

Kathleen Murphy column: Stopping to think of things that run deep – Duluth News Tribune

It's almost startling to think about how often I forget about this particular stop sign. Usually I catch myself just in the nick of time, slamming on the brakes and lurching to a stop. But every so often, when my mind is on the day ahead, or I'm just flat-out daydreaming, I find that this stop sign comes to my attention only when I am halfway through the intersection.

This is not a universal problem for me. I'd be hard-pressed to think of any other time I missed a stop sign. Ever. I'm an attentive driver. I may have a slight case of lead foot, but I am cautious and aware of my surroundings.

The problem lies in this particular stop sign. You see, I grew up in this neighborhood. The streets where I drive today are the very streets on which I learned to drive a car in the first place.

Where I discovered the art of quickly judging the distance between myself and the front bumper of my '79 Chevy Impala (about half a football field length, if I remember correctly).

Where I misjudged many, many times how long it would take to stop a car that was roughly the size and weight of a barge.

Where I realized that growing up on a one-way street and watching cars drive down it in one direction for over a decade did not guarantee I would remember this same fact once behind the wheel.

Where a certain stop sign was not yet in existence, and I drove down the street with nary a tap on the brake. Which, of course, might have led to the addition of the stop sign. Regardless, the stop sign wasn't a part of my childhood experiences, whereas the road itself is. Including, I might add, the stop sign farther down the road, which was already in place during my childhood, and which I have never once run.

This doesn't end at traffic signs. Unconscious thoughts and habits from my childhood filter their way into my everyday life with regularity. My childhood home had a sink where the hot and cold faucets were on opposite sides of the standard. It was just one single, rogue sink, but it was one I used often, so it wormed its way into my subconscious habits. I was an adult buying my second home before I realized there even was a standard side for the hot and cold. To this day, if a sink isn't labelled in red and blue or H and C, I have to think about it longer than I'm comfortable admitting. Sometimes, I still guess wrong.

Because I was an '80s girl, I still use the term "arena" for any large convention center/stadium complex; still call the middle school years "junior high." Both have been incorrect for so long now that I often meet people who have to ask me to clarify what the heck I'm talking about. (The DECC was originally called The Arena, for anyone left wondering.)

I am still, to this day, surprised when a package is delivered to my home on a Sunday.

An audio book will always be called a book-on-tape, a stream is a crik, and an ATM, a TYME Machine. That last one has earned me a lot of concerned looks from people who didn't grow up in the Midwest. The most memorable was an elderly East Coast transplant who asked me "Are you feeling alright, Dearie?" then didn't seem to believe me when I told her it was what we called ATM machines back in the '80s.

Our childhood experiences shape us, for better or for worse. They can create bonds and habits that are not so easily broken, as evidenced by the stop sign I think of as new because it wasn't there 30 years ago. Just give me a few more decades to get used to it, and...

Who am I kidding? I'll still occasionally forget about it.

Kathleen Murphy is a freelance writer who lives and works in Duluth. Write to her at kmurphywrites@gmail.com.

See original here:
Kathleen Murphy column: Stopping to think of things that run deep - Duluth News Tribune

Read More..