
Video Meeting Apps/ Software to Run Home-Office Effectivel… – United News of Bangladesh

In the wake of the lockdown conditions imposed in a number of countries in the fight against the COVID-19 pandemic, the movements of common people have been restricted within the four walls of their homes. However, work must go on, whether offline or online. Many people are working from home during this quarantine period. Even if the members of an office or business organization cannot join a meeting physically, the authorities can arrange online video conferences to connect people residing in any corner of the planet (with an internet connection).

Read this article to learn about some of the best video meeting apps/software to run home-office successfully during the lockdown period.

Are you looking for a free software/app to arrange video conferences with multiple participants? If yes, check out Zoom Meetings. The Basic plan allows you to host a video meeting with up to 100 participants. A participant is an invitee in your scheduled meeting. Participants can join a Zoom meeting (via mobile phone, tablet, or desktop) free of cost. An invited participant can join a Zoom meeting without having a Zoom account. Each video meeting can last up to 40 minutes.

What's more, Zoom allows one-to-one video meetings without any time constraints. Free users are permitted to share screens with other participants and record the video meeting locally. The popularity of Zoom is skyrocketing in the Play Store charts thanks to its amazing features.

However, if you want no time restriction during a video conference, dive into Zoom's Pro plan, which charges about $14.99 or 1273.33 TK ($1 = Tk 85 approx.) per month per host. A host can schedule, start and control the settings in a Zoom video meeting. Under this plan, a video meeting with 100 participants can extend as long as 24 hours. Moreover, this plan offers 1GB of cloud storage to record and broadcast the meetings.

To facilitate small and medium-sized business organizations, Zoom offers a Business plan, which costs about $19.99 or 1678 TK ($1 = Tk 85 approx.) per month for one host. Under this plan, you can host a Zoom meeting with up to 300 participants at a time. In addition, by choosing the Enterprise or Enterprise Plus plan, you can arrange large-scale video meetings with 500 or 1,000 participants respectively, paying $19.99 per month for one host.

Google never stops surprising us with innovative services. You will find the Google Hangouts feature in Google's cloud-based office/business productivity bundle, G Suite. Google Hangouts supports video chats and video calls of unlimited duration with up to 10 participants. With a free Gmail account, you can access Google Hangouts.

Google has recently added the Google Hangouts Meet feature to G Suite, a modified version of the earlier Google Hangouts service. This cloud-based collaboration and communication tool helps you arrange video conferences with secure data transmission and high-quality video. Google Hangouts Meet can be accessed through apps (for iPhone/Android) and any standard web browser (for laptop/desktop).

- G Suite Basic plan costs $6 or 510 TK ($1 = Tk 85 approx.) a month per user. It allows video conferences with up to 25 participants per session.

- G Suite Business plan costs $12 or 1020 TK ($1 = Tk 85 approx.) a month per user. Under this plan, you can arrange a video meeting with up to 50 participants at a time.

- G Suite Enterprise plan costs $25 or 2124 TK ($1 = Tk 85 approx.) a month per user. This elite plan accommodates a video conference of up to 100 participants at once.

Utilizing the Google Drive cloud storage platform, you can record and save the meetings. Google Drive provides 30 GB of storage per user under the G Suite Basic plan, while subscribers to G Suite Business and G Suite Enterprise accounts enjoy unlimited storage. Furthermore, by subscribing to the G Suite Enterprise plan, you can broadcast/live-stream your video meeting to up to 100,000 viewers via shared links.

Choose Skype when you are looking for free software to arrange video meetings of unlimited duration. This software is a windfall for running home-office meetings for a small team. Using Skype, you can arrange a video meeting for a team of up to 50 participants (including the host). To avail of upgraded features, you can subscribe to the Skype for Business service.

With an active Wi-Fi connection, you can arrange a video conference via Skype (version 5.0 or later). To avail of the video conference calling feature, you need to include a group video-calling subscription when registering your Skype account. Each participant needs to sign up and download the Skype app/software on their mobile or desktop, which can take some time and storage space. After installing the Skype app/software, you can host a video meeting.

So far we have highlighted apps/software for video meetings. To host an audio conference with your teammates, you can visit the FreeConference.com website. It can be a great communication tool for arranging free online meetings during the quarantine. The service of FreeConference.com can also be accessed through mobile apps for iPhone and Android.

- The free plan allows up to 100 call participants via telephone. This tier allows only five web participants for video meetings free of cost. Moreover, you can share screens with participants and use an online whiteboard.

- Paying $9.99 or 849 TK ($1 = Tk 85 approx.) per month, you can register for the Starter plan and increase the number of video conferencing participants to 15. In addition to the free plan features, you can save the audio recording of the meeting.

- To host a video meeting with up to 50 participants, you can subscribe to the Plus plan, which costs $24.99 or 2123 TK ($1 = Tk 85 approx.) per month. This plan allows you to save both the audio and video recordings of the meetings.

See original here:
Video Meeting Apps/ Software to Run Home-Office Effectivel... - United News of Bangladesh

Read More..

iPhone 11 vs. iPhone XR specs: Which iPhone is the better buy? – CNET

With Apple's announcement of the iPhone SE 2020, the company is hoping to court more cost-conscious people with a $399 budget iPhone. But for those who are in the market for an iPhone and have more to spare, Apple also has the iPhone 11, which starts at $699 (£729, AU$1,199). In addition, the company is offering 2018's iPhone XR for $599 (£629, AU$1,049). With these prices in mind, we compare the iPhone 11 with the iPhone XR to see which phone is the better buy today.

Read more:iPhone camera comparison: iPhone 11 with Deep Fusion vs. iPhone XR

With dual rear cameras, Night Mode shooting and a 12-megapixel front-facing shooter, the iPhone 11 does have more tricks up its sleeves for photos and videos. But compared to the iPhone XR, which already takes fantastic pictures and video, the advantages are slight for the extra $100 you'll pay. In addition, the iPhone 11 has a new U1 chip and Wi-Fi 6 and Gigabit LTE capabilities. You will see these next-gen features in future iPhones and Android devices, so if you want to have a modern phone for the next few years, the iPhone 11 is the best bet. But because Wi-Fi 6 and Gigabit LTE are not fully built-out networks yet, you won't see any immediate advantages -- another reason why we still prefer the iPhone XR. Read our Apple iPhone 11 review.

It might have an older processor and specs, but for the $100 you'll be saving with the iPhone XR, we think it's worth it. It runs iOS 13 just as smoothly as the iPhone 11 on its A12 chipset, and though it doesn't have an ultrawide camera and the updated camera tech, the iPhone XR works just fine for posting photos on Instagram and social media, too. In general, if you're on a tight budget, the iPhone XR is still a great value. We'd suggest either pocketing that $100 for something else or using it to upgrade to the 128GB model and still have $50 left over. Read our Apple iPhone XR review.

The iPhone 11 and XR look nearly identical -- the quickest way to identify them is by the iPhone 11's extra camera and square bump. But besides that and the different color choices (the iPhone 11 comes in lavender, mint green and a pastel yellow, while the iPhone XR comes in a canary yellow, coral and blue), there are no obvious differences. They are the same size and weigh the same, and the phones have the same 6.1-inch LCD display with the same resolution and pixel density.

Besides the number of rear cameras they have, the iPhone 11 and XR look similar.

We performed a series of drop tests on the iPhone 11 and XR, and both phones are quite sturdy. We dropped the iPhone 11 on its front and back on smooth concrete from 3 feet, 6 feet and 8 feet, 7 inches. On all drops, the phone's back glass and screen didn't crack. There was a small, cosmetic scratch above the camera lens from 8 feet, but the camera lens was completely fine.

Read more: Best iPhone 11, 11 Pro and 11 Pro Max cases you can get now

When we dropped the iPhone XR back in 2018, we dropped it on a concrete sidewalk. At waist height (about 3 feet), the phone's camera glass cracked, but the back glass survived. The screen also was unscathed. At eye level (around 5 feet), the iPhone XR survived a fall on its back with no new damage, but when we dropped it on its screen, the display ultimately cracked. Though that was unfortunate, the fact that the iPhone XR survived waist-high drops was still good.

Both phones are water-resistant, but the iPhone 11 has a higher IP rating of IP68 and can survive underwater at 2 meters (about 6.5 feet) for 30 minutes. The iPhone XR, meanwhile, is rated IP67 and, on paper, can survive underwater to a depth of 1 meter for 30 minutes. When we took it out on dives, however, the iPhone XR kept on ticking even after 19 minutes underwater at a depth of 5 meters. In our water tests, we were unable to drown the iPhone 11 or iPhone 11 Pro, with both phones surviving 30 minutes at a whopping 12 meters underwater. When it comes to water resistance, the iPhone 11 far exceeds both its own promises and the iPhone XR's hardiness.

Both phones have a 12-megapixel camera, but the iPhone 11 has a second, 12-megapixel ultrawide camera and a new Night Mode for low-light photography. All in all, outdoor and well-lit photos on both devices look similarly vibrant, with consistent and bright coloring on both cameras.

However, the iPhone 11 did take notably sharper photos with finer details, especially when pictures were viewed at full resolution. It also did a better job brightening up night time and dim photos with its new mode. The second ultrawide camera is useful when you want to fit more content into each frame or you want to capture more expansive scenes too. Its flash is a tad brighter as well, though we rarely use the feature. Lastly, we like that we can now take portrait photos of pets on the iPhone 11 (the iPhone XR does not recognize nonhuman faces for portraits, which is a drag). The iPhone 11 also makes use of a feature called Deep Fusion, which further improves detail and reduces image noise in photos.

As for video, footage on the iPhone 11 has a tad better dynamic range, so lighting and exposure look more even and natural. Video stabilization is a bit steadier as well on the standard 12-megapixel camera. But the wide-angle camera does not have optical image stabilization, and during 4K video recording you can only switch between cameras when you're filming in 30fps (though both cameras can record 60fps).

On the front, the iPhone 11 has a 12-megapixel camera while the iPhone XR has a 7-megapixel camera. While we do welcome the extra resolution of the iPhone 11, we never really had many gripes with the iPhone XR's front-facing camera, and if you're a casual selfie-taker, the iPhone XR is definitely satisfactory. The iPhone 11's front camera can also pull out for a wider point of view, record 4K video at 60fps and take slow-mo videos. But since we mostly use the front-facing camera for selfies, we didn't really find ourselves using those last two features often.

In this photo you can see the iPhone 11 captured more details in the lines and tubes of the installation.

With the iPhone 11's new Night Mode, the camera is able to handle low light scenes a lot better.

The iPhone 11 brightened up this picture and has better dynamic range.

The iPhone 11 features Apple's newest A13 Bionic processor, while the iPhone XR has 2018's A12 chipset. For the most part, the iPhone 11 and XR run iOS 13 equally smoothly and quickly. When it comes to day-to-day tasks like browsing the web, firing up the camera or opening apps, we couldn't discern any speed differences. But on paper, the iPhone 11 is unquestionably the faster phone. In our benchmark tests, the newer iPhone far surpassed the iPhone XR and scored much higher on all the tests we ran.

While we're currently in the middle of changing our battery test methodology for phones, we did test both the iPhone 11 and the iPhone XR with a new streaming video test. Here's what we wrote in our iPhone 11 review: After conducting our formal battery tests and living with the iPhone 11 for over a month, we found the battery life is about the same as the iPhone XR. In our streaming video tests, the iPhone 11 lasted 13 hours and 52 minutes compared with the iPhone XR's time of 12 hours and 7 minutes in the same test. In daily use, the iPhone 11 has been lasting about a day and a half.

The iPhone 11 has a U1 chip for "spatial awareness," according to Apple, and it helps iPhones find other iPhones more precisely when they're in close proximity. The chip also lets you "point your iPhone toward someone else's, and AirDrop will prioritize that device so you can share files faster." Many believe that the U1 chip is actually laying the groundwork for a long-rumored Apple tile tracker.


The iPhone 11 has Wi-Fi 6, which is the next generation of wireless networking. As CNET Senior Editor Ry Crist puts it, "Wi-Fi 6 supports faster top transfer speeds; lets devices send more information with each individual transmission; lets routers and other access points service more devices at once; helps sensors and other wireless gadgets conserve battery power by scheduling transmissions; and facilitates better, faster performance in dense, crowded environments like airports and stadiums." But you likely won't see any perks in your iPhone 11 now. Wi-Fi 6 was only recently certified in September, and Wi-Fi 6 routers are expensive.

The iPhone 11 has Gigabit LTE, an advanced version of 4G LTE. Gigabit LTE is really fast and can reach peak speeds of 1 gigabit per second, which is about the same speed as a landline internet connection. Apple introduced Gigabit LTE to its iPhone XS and XS Max in 2018, but the iPhone XR was oddly left out of the update. But the iPhone 11 has it, and while you likely won't reach those speeds all the time, your overall speed is going to be faster than on an older phone, and you have a lot more clearance when it comes to potential speed.

The iPhone 11 comes in a 256GB model. At $849 (£879, AU$1,449) it's more expensive, but if you take a lot of photos and shoot a lot of videos, the extra onboard storage will come in handy -- especially if you don't really use cloud storage. Currently, Apple only sells 64GB and 128GB models of the iPhone XR.

Originally published Oct 10, 2019.

Visit link:
iPhone 11 vs. iPhone XR specs: Which iPhone is the better buy? - CNET

Read More..

Week in review: Cloud migration and cybersecurity, data trending on the dark web, Zoom security – Help Net Security

Here's an overview of some of last week's most interesting news and articles:

What type of data is trending on the dark web?
Fraud guides accounted for nearly half (49%) of the data being sold on the dark web, followed by personal data at 15.6%, according to Terbium Labs.

Cybersecurity in a remote workplace: A joint effort
With so many employees now working from home, business networks have been opened to countless untrusted networks and potentially some unsanctioned devices. Naturally, the question of security arises given the need to ensure that employees are well prepared for the challenges associated with remote work. It also means that businesses must be certain that their security infrastructure is well geared to secure personal and corporate data.

Will Zoom manage to retain security-conscious customers?
While Zoom Video Communications is trying to change the public's rightful perception that, at least until a few weeks ago, Zoom security and privacy were low on their list of priorities, some users are already abandoning the ship.

GDPR, CCPA and beyond: How synthetic data can reduce the scope of stringent regulations
As many organizations are still discovering, compliance is complicated. Stringent regulations, like the GDPR and the CCPA, require multiple steps from numerous departments within an enterprise in order to achieve and maintain compliance.

April 2020 Patch Tuesday: Microsoft fixes three actively exploited vulnerabilities
For the April 2020 Patch Tuesday, Adobe plugs 5 flaws and Microsoft 113, three of which are currently being exploited by attackers.

VMware plugs critical flaw in vCenter Server, patch ASAP!
VMware has fixed a critical vulnerability (CVE-2020-3952) affecting vCenter Server, which can be exploited to extract highly sensitive information that could be used to compromise vCenter Server or other services which depend on the VMware Directory Service (vmdir) for authentication.

On my mind: Transitioning to third-party cloud services
The transition from traditional onsite data colocation to the use of third-party cloud shared tenant services should be on everyone's mind. With this growing shift, everyone from individuals to enterprises will continue to fuel threat actors by improperly storing information in the cloud.

Using Cisco IP phones? Fix these critical vulnerabilities
Cisco has released another batch of fixes for a number of its products. Among the vulnerabilities fixed are critical flaws affecting a variety of Cisco IP phones, as well as Cisco UCS Director and Cisco UCS Director Express for Big Data, its unified infrastructure management solutions for data center operations.

You have to consider cybersecurity at all points of a cloud migration
Human error and complex cloud deployments open the door to a wide range of cyber threats, according to Trend Micro.

Phishing kits: The new bestsellers on the underground market
Phishing kits are the new bestsellers of the underground market, with the number of phishing kit ads on underground forums and their sellers having doubled in 2019 compared to the previous year, Group-IB reveals.

760+ malicious packages found typosquatting on RubyGems
Researchers have discovered over 760 malicious Ruby packages (aka gems) typosquatting on RubyGems, the Ruby community's gem repository/hosting service.

Small businesses unprepared for remote working, most don't provide cybersecurity training
The overnight move to a virtual workplace has increased cybersecurity concerns for small business owners, but many still have not implemented remote working policies to address cybersecurity threats, according to a survey by the Cyber Readiness Institute (CRI).

Zoom in crisis: How to respond and manage product security incidents
Zoom is in crisis mode, facing grave and very public concerns regarding trust in management's commitment to secure products, respect for user privacy, the honesty of its marketing, and the design decisions that preserve a positive user experience. Managing the crisis will be a major factor in determining Zoom's future.

Are we doing enough to protect connected cars?
Even though connected cars should meet the highest level of security, safety, and performance, we know this is not always the case. In this interview, Moshe Shlisel, CEO at GuardKnox, discusses today's most pressing issues related to automotive security.

The dangers of assumptions in security
Assuming things is bad for your security posture. You are leaving yourself vulnerable when you assume what you have is what you need, or that what you have is working as advertised. You assume you are protected, but are you really?

Application security: Getting it right, from the start
When you set out to design an application, you want to make sure it behaves as intended. In other words, that it does what you want, when it's supposed to, and that it does so consistently.

Information security goes non-binary
Finding security holes in information systems is as old as the first commercially available computer. Back when a computer was something that sat in a computer room, users would try to bypass restrictions, sometimes simply by trying to guess the administrator's password.

Office printers: The ticking IT time bomb hiding in plain sight
Office printers don't have to be security threats: with foresight and maintenance they're very easily threat-proofed. The problem is that system administrators rarely give the humble printer (or scanner, or multifunction printer) much attention.

New lower pricing for CISSP, CCSP and SSCP online instructor-led training
Whether you're studying for the CISSP, CCSP, SSCP or another industry-leading (ISC)² certification, (ISC)² is here to help you stay on track to certification with its Official Online Instructor-Led Training, now at a new lower price.

US victims lose $13 million from COVID-19-related scams
Successful COVID-19-themed fraud attempts perpetrated in the US since the beginning of the year have resulted in a little over $13 million in losses, the Federal Trade Commission has shared.

When your laptop is your workspace, the real office never closes
With the COVID-19 pandemic, working from home has moved from a company perk to a hard requirement. Government social distancing mandates have forced complete office closures, completely transforming how and where people work. With people working from home and connected to business applications running in the cloud, the notion of an office building representing the company network has vanished overnight.

Shift to work-from-home: Most IT pros worried about cloud security
As most companies make the rapid shift to work-from-home to stem the spread of COVID-19, a significant percentage of IT and cloud professionals are concerned about maintaining the security of their cloud environments during the transition, according to a survey conducted by Fugue.

New infosec products of the week: April 17, 2020
A rundown of the most important infosec products released last week.

Visit link:
Week in review: Cloud migration and cybersecurity, data trending on the dark web, Zoom security - Help Net Security

Read More..

What is edge computing? The benefits of mobile edge computing and 5G – Verizon Communications

Edge computing is based on bringing computing resources closer to users, at the edge of the network. By placing cloud resources physically near the source of the data, instead of in data centers hundreds or thousands of miles away, edge computing can help critical, performance-impacting applications respond more quickly and efficiently.

In today's network architecture, data is typically processed either on our devices, like PCs and smartphones, or in a centralized cloud (apps like Gmail, Dropbox and others run in such a cloud). The cloud provides infrastructure and other powerful capabilities like machine learning, and gives us unparalleled access to software and data, but performance can sometimes be slow or spotty. Edge computing attempts to overcome this performance issue.

Verizon first launched a Mobile Edge Compute (MEC) service with AWS in November. We're calling it Verizon 5G Edge. It utilizes all the benefits of 5G cellular technology to provide even faster access to the applications and data individuals and businesses need.

By the end of 2020, billions of connected devices are estimated to be added to cellular networks, requiring both wide spectrums of cellular frequencies and near-real-time processing with minimal latency. Verizon's 5G Ultra Wideband network should help deliver on those demands. 5G technology is expected to play a key role in increasing the speed at which data travels between two locations, and edge computing will help shorten the distance between the two.

Edge computing brings large servers and data centers, or the cloud, closer to the end user. This will help with situations like augmented reality, where the real-time nature of data processing is critical.

Without edge computing, data would likely need to travel much further away to a central cloud server, and the resulting latency, or lag time, could be noticeably longer.
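
To make the difference concrete, the sketch below times a small request against two endpoints: one standing in for a nearby edge site and one for a distant cloud region. It is only an illustration; both hostnames are placeholders rather than real Verizon or AWS endpoints, and actual numbers depend entirely on your network.

```python
# A minimal latency-comparison sketch. Both hostnames are placeholders,
# not real Verizon or AWS endpoints; swap in endpoints you actually run.
import time
import urllib.request

ENDPOINTS = {
    "edge (nearby)": "https://edge.example.com/health",
    "cloud (distant region)": "https://cloud-far.example.com/health",
}

def round_trip_ms(url: str, attempts: int = 5) -> float:
    """Average round-trip time for a small HTTPS request, in milliseconds."""
    total = 0.0
    for _ in range(attempts):
        start = time.perf_counter()
        with urllib.request.urlopen(url, timeout=5) as resp:
            resp.read()
        total += (time.perf_counter() - start) * 1000
    return total / attempts

for name, url in ENDPOINTS.items():
    print(f"{name}: {round_trip_ms(url):.1f} ms")
```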

Additionally, edge computing is expected to have a positive impact on agriculture, remote healthcare, and manufacturing, among many other applications.

Verizon continues to develop 5G edge technology to revolutionize mobility and connectivity across devices. Learn more about what 5G is and all the implications for the technology of the future.

View post:
What is edge computing? The benefits of mobile edge computing and 5G - Verizon Communications

Read More..

Edge AI Is The Future, Intel And Udacity Are Teaming Up To Train Developers – Forbes

Intel Edge AI for IoT Developers

On April 16, 2020, Intel and Udacity jointly announced their new Intel Edge AI for IoT Developers Nanodegree program to train the developer community in deep learning and computer vision. If you are wondering where AI is headed, now you know: it's headed to the edge. Edge computing is the concept of storing and processing data directly at the location where it is needed. The global edge computing market is forecasted to reach 1.12 trillion dollars by 2023.

There's a real need for developers worldwide in this new market. Intel and Udacity aim to train 1 million developers.

In the age of innovation, as data continues to grow, there's a real need for data storage and computation to be located on the device. Privacy, security, and speed are the biggest reasons the distributed edge model can work better in certain use cases. At a time when we are all concerned about our personal data being stored on a cloud server, what if the app could handle our personal data on the device instead? With Edge AI, the personalization features that we want from an app can be achieved on the device. Transferring data over networks and into cloud-based servers introduces latency. At each endpoint, there are security risks involved in the data transfer.

While cloud computing offers unquestionable economies of scale, a distributed computing model is driven by the nature of the data itself. The volume of data will make it difficult or expensive to move due to bandwidth costs or availability. The velocity of data will catalyze more real-time applications that cannot be limited by network latency. And the variety of data will be governed by regulatory, privacy and security constraints.

This is why the Edge AI Software market is forecasted to grow from $355 million in 2018 to 1.12 trillion dollars by 2023.

At the beginning of the AI evolution, we were concerned with crossing over from statistical models to data science, machine learning, and building algorithms that run on the cloud. Now, software engineers increasingly find that their projects are naturally scoped to include an AI component. You don't have to be a machine learning engineer to know about deep learning or reinforcement learning.

Now, IoT developers who may have been sitting on the sidelines working on software projects that are more feature-based than data-based will have an opportunity to get involved in the AI evolution. The Intel Edge AI for IoT Developers Nanodegree Program will introduce students to the Intel OpenVINO toolkit, which allows developers to deploy pre-trained deep learning models through a high-level C++ or Python inference engine API integrated with application logic.
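
As a rough picture of that workflow, here is a minimal sketch using the 2020-era `openvino.inference_engine` Python API; the model and image file names are hypothetical placeholders, and a real application would add its own pre- and post-processing around the inference call.

```python
# A minimal sketch of OpenVINO-style inference, assuming the 2020-era
# openvino.inference_engine Python API. Model and image paths are
# hypothetical placeholders, not files shipped with the toolkit.
import cv2
from openvino.inference_engine import IECore

MODEL_XML = "person-detection.xml"   # hypothetical IR model description
MODEL_BIN = "person-detection.bin"   # hypothetical IR model weights

ie = IECore()
net = ie.read_network(model=MODEL_XML, weights=MODEL_BIN)
exec_net = ie.load_network(network=net, device_name="CPU")

input_name = next(iter(net.input_info))   # first (and only) input blob
output_name = next(iter(net.outputs))     # first output blob
n, c, h, w = net.input_info[input_name].input_data.shape

# Preprocess one frame: resize and convert HWC/BGR to the NCHW layout the IR expects.
frame = cv2.imread("frame.jpg")           # placeholder input image
blob = cv2.resize(frame, (w, h)).transpose((2, 0, 1)).reshape((n, c, h, w))

result = exec_net.infer(inputs={input_name: blob})
print(result[output_name].shape)          # application logic would parse detections here
```

The same pattern extends to other Intel edge hardware by changing the `device_name` argument, which is part of what makes the toolkit attractive for heterogeneous IoT deployments.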

Students will work on Intel's IoT DevCloud to develop, test, and run their workloads on a cluster of the latest Intel hardware and software. Not only will IoT developers learn to apply AI in their applications, but they will also work on performance and other issues that arise with building data-centric applications.

Software engineers, machine learning engineers, data scientists and other technologists who've been working on cloud-based AI applications now have a new direction to take in their learning path. Learning to develop Edge AI applications can offer a new perspective on more user-driven AI application development. With more user-based personalization directly on the device, business solutions, features, and user data can be viewed from a more user-centric perspective.

This program will be beneficial for all developers who want to be involved in AI-based projects.

This Intel and Udacity collaboration will be the pioneer program that lays the foundation of training in Edge AI for years to come. Just like Udacity's hugely popular machine learning and AI Nanodegree programs, the Intel Edge AI for IoT Developers Nanodegree Program will facilitate streamlined training, project-based learning, mentorship and certification that allow for a quick ramp-up of both AI and IoT development knowledge.

For computer science majors fresh out of school looking for an entry point into the industry, this type of program can offer both opportunities and skills that bridge the gap between education and real-world applications.

At a time when we are all concerned about our job security and prospects due to the coronavirus pandemic, it's good to know that there are new paths to explore. If you don't currently work in manufacturing or healthcare, it's difficult to envision Edge AI used on factory assembly lines and in urgent care medical imaging equipment.

But, how about imagining the technology in drones, security cameras, robots and self-driving cars?

There are many smartphone apps that we use day to day that could potentially deliver more personalized features through an AI component.

The value and impact of Edge AI applications is showing no limits when it comes to use cases. The ingenuity of companies during this COVID-19 crisis is humbling. Industries including public safety and healthcare, for example, are designing and deploying solutions now that leverage AI and computer vision technologies to deliver accurate and real-time insights to help with tracking, testing and treatment. The technology exists; we are only limited by the imagination at scale of the developer community.

This Edge AI evolution is the next generation of AI evolution that will change the way that we interact with our devices and offer better and more secure ways to deploy AI applications.

Students from the program will learn directly from experienced professionals in the Edge AI and IoT field such as Stewart Christie, who has been with Intel for almost 20 years, and is currently the Community Manager of the Internet of Things Developer Program; Archana Iyer, former Research Engineer at Saama; Soham Chatterjee, former Software Innovator at Intel; and Michel Virgo, Senior Curriculum Manager at Udacity.

The project-based approach that Udacity uses will allow students to learn hands-on skills while interacting with and receiving mentorship from experienced professionals. Many developers are familiar with this type of quick ramp-up of development skill sets across multiple areas.

Projects in the Nanodegree program include:

If you are reluctant to commit, then try the free Intel Edge AI Fundamentals course or take advantage of Udacity's free access for one month.

Continue reading here:
Edge AI Is The Future, Intel And Udacity Are Teaming Up To Train Developers - Forbes

Read More..

AI Could Save the World, If It Doesn't Ruin the Environment First – PCMag

When Mohammad Haft-Javaherian, a student at the Massachusetts Institute of Technology, attended MIT's Green AI Hackathon in January, it was out of curiosity to learn about the capabilities of a new supercomputer cluster being showcased at the event. But what he had planned as a one-hour exploration of a cool new server drew him into a three-day competition to create energy-efficient artificial-intelligence programs.

The experience resulted in a revelation for Haft-Javaherian, who researches the use of AI in healthcare: "The clusters I use every day to build models with the goal of improving healthcare have carbon footprints," Haft-Javaherian says.

The processors used in the development of artificial intelligence algorithms consume a lot of electricity. And in the past few years, as AI usage has grown, its energy consumption and carbon emissions have become an environmental concern.

"I changed my plan and stayed for the whole hackathon to work on my project with a different objective: to improve my models in terms of energy consumption and efficiency," says Haft-Javaherian, who walked away with a $1,000 prize from the hackathon. He now considers carbon emissions an important factor when developing new AI systems.

But unlike Haft-Javaherian, many developers and researchers overlook or remain oblivious to the environmental costs of their AI projects. In the age of cloud-computing services, developers can rent online servers with dozens of CPUs and powerful graphics processors (GPUs) in a matter of minutes and quickly develop powerful artificial intelligence models. And as their computational needs rise, they can add more processors and GPUs with a few clicks (as long as they can foot the bill), not knowing that with every added processor, they're contributing to the pollution of our green planet.

The recent surge in AI's power consumption is largely caused by the rise in popularity of deep learning, a branch of artificial-intelligence algorithms that depends on processing vast amounts of data. "Modern machine-learning algorithms use deep neural networks, which are very large mathematical models with hundreds of millions, or even billions, of parameters," says Kate Saenko, associate professor at the Department of Computer Science at Boston University and director of the Computer Vision and Learning Group.

These many parameters enable neural networks to solve complicated problems such as classifying images, recognizing faces and voices, and generating coherent and convincing text. But before they can perform these tasks with optimal accuracy, neural networks need to undergo training, which involves tuning their parameters by performing complicated calculations on huge numbers of examples.

"To make matters worse, the network does not learn immediately after seeing the training examples once; it must be shown examples many times before its parameters become good enough to achieve optimal accuracy," Saenko says.
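
To make that repetition concrete, here is a toy training loop. It is a deliberately tiny logistic regression rather than a deep network, but it shows the basic pattern: the same examples are revisited over many epochs while parameters are nudged toward lower error, which deep learning repeats at vastly larger scale.

```python
# A toy illustration of why training is compute-hungry: the same examples
# are revisited over many epochs while parameters are nudged toward lower
# error. This is a tiny logistic regression, not a deep network, but deep
# learning repeats the same pattern at vastly larger scale.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))                  # 1,000 examples, 20 features
true_w = rng.normal(size=20)
y = (X @ true_w + rng.normal(scale=0.1, size=1000) > 0).astype(float)

w = np.zeros(20)                                 # the model's parameters
lr = 0.1
for epoch in range(200):                         # every example is seen 200 times
    preds = 1.0 / (1.0 + np.exp(-(X @ w)))       # sigmoid predictions
    grad = X.T @ (preds - y) / len(y)            # gradient of the logistic loss
    w -= lr * grad                               # nudge parameters downhill

accuracy = ((1.0 / (1.0 + np.exp(-(X @ w))) > 0.5) == y).mean()
print(f"training accuracy after 200 epochs: {accuracy:.2%}")
```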

All this computation requires a lot of electricity. According to a study by researchers at the University of Massachusetts, Amherst, the electricity consumed during the training of a transformer, a type of deep-learning algorithm, can emit more than 626,000 pounds of carbon dioxide, nearly five times the emissions of an average American car. Another study found that AlphaZero, Google's Go- and chess-playing AI system, generated 192,000 pounds of CO2 during training.
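
The arithmetic behind such estimates is straightforward: average hardware power times training time times a data-center overhead factor (PUE), converted to emissions with the grid's average carbon intensity. The sketch below is a hedged back-of-envelope version; the default constants are commonly cited US averages used here as assumptions in the spirit of the UMass Amherst methodology, not figures quoted from the study, and the example workload is invented.

```python
# A hedged back-of-envelope emissions estimate: average hardware power
# times training time times a data-center overhead factor (PUE), converted
# with an average grid carbon intensity. The constants are assumptions
# (commonly cited US averages), not figures quoted from the study.

def training_co2_lbs(avg_power_watts: float,
                     training_hours: float,
                     pue: float = 1.58,               # assumed data-center overhead
                     lbs_co2_per_kwh: float = 0.954   # assumed US grid average
                     ) -> float:
    """Estimate pounds of CO2 emitted by one training run."""
    energy_kwh = (avg_power_watts / 1000.0) * training_hours * pue
    return energy_kwh * lbs_co2_per_kwh

# Example (invented): 8 GPUs drawing roughly 300 W each, training for two weeks.
print(round(training_co2_lbs(avg_power_watts=8 * 300, training_hours=14 * 24)), "lbs CO2")
```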

To be fair, not all AI systems are this costly. Transformers are used in a fraction of deep-learning models, mostly in advanced natural-language processing systems such as OpenAI's GPT-2 and BERT, which was recently integrated into Google's search engine. And few AI labs have the financial resources to develop and train expensive AI models such as AlphaZero.

Also, after a deep-learning model is trained, using it requires much less power. "For a trained network to make predictions, it needs to look at the input data only once, and it is only one example rather than a whole large database. So inference is much cheaper to do computationally," Saenko says.

Many deep-learning models can be deployed on smaller devices after being trained on large servers. Many applications of edge AI now run on mobile devices, drones, laptops, and IoT (Internet of Things) devices. But even small deep-learning models consume a lot of energy compared with other software. And given the expansion of deep-learning applications, the cumulative costs of the compute resources being allocated to training neural networks are developing into a problem.

"We're only starting to appreciate how energy-intensive current AI techniques are. If you consider how rapidly AI is growing, you can see that we're heading in an unsustainable direction," says John Cohn, IBM Fellow and research scientist with the MIT-IBM Watson AI Lab, who co-led the Green AI hackathon at MIT.

According to one estimate, by 2030, more than 6 percent of the world's energy may be consumed by data centers. "I don't think it will come to that, though I do think exercises like our hackathon show how creative developers can be when given feedback about the choices they're making. Their solutions will be far more efficient," Cohn says.

"CPUs, GPUs, and cloud servers were not designed for AI work. They have been repurposed for it and, as a result, are less efficient than processors that were designed specifically for AI work," says Andrew Feldman, CEO and cofounder of Cerebras Systems. He compares the use of heavy-duty generic processors for AI to using an 18-wheel truck to take the kids to soccer practice.

Cerebras is one of a handful of companies that are creating specialized hardware for AI algorithms. Last year, it came out of stealth with the release of the CS-1, a huge processor with 1.2 trillion transistors, 18 gigabytes of on-chip memory, and 400,000 processing cores. Effectively, this allows the CS-1, the largest computer chip ever made, to house an entire deep learning model without the need to communicate with other components.

"When building a chip, it is important to note that communication on-chip is fast and low-power, while communication across chips is slow and very power-hungry," Feldman says. "By building a very large chip, Cerebras keeps the computation and the communication on a single chip, dramatically reducing overall power consumed. GPUs, on the other hand, cluster many chips together through complex switches. This requires frequent communication off-chip, through switches and back to other chips. This process is slow, inefficient, and very power-hungry."

The CS-1 uses a tenth of the power and space of a rack of GPUs that would provide the equivalent computation power.

Satori, the new supercomputer that IBM built for MIT and showcased at the Green AI hackathon, has also been designed to perform energy-efficient AI training. Satori was recently rated as one of the world's greenest supercomputers. "Satori is equipped to give energy/carbon feedback to users, which makes it an excellent laboratory for improving the carbon footprint of both AI hardware and software," says IBM's Cohn.

Cohn also believes that the energy sources used to power AI hardware are just as important. Satori is now housed at the Massachusetts Green High Performance Computing Center (MGHPCC), which is powered almost exclusively by renewable energy.

"We recently calculated the cost of a high workload on Satori at MGHPCC compared to the average supercomputer at a data center using the average mix of energy sources. The results are astounding: One year of running the load on Satori would release as much carbon into the air as is stored in about five fully grown maple trees. Running the same load on the 'average' machine would release the carbon equivalent of about 280 maple trees," Cohn says.

Yannis Paschalidis, the director of Boston University's Center for Information and Systems Engineering, proposes a better integration of data centers and energy grids, which he describes as demand-response models. "The idea is to coordinate with the grid to reduce or increase consumption on demand, depending on electricity supply and demand. This helps utilities better manage the grid and integrate more renewables into the production mix," Paschalidis says.

For instance, when renewable energy supplies such as solar and wind power are scarce, data centers can be instructed to reduce consumption by slowing down computation jobs and putting low-priority AI tasks on pause. And when there's an abundance of renewable energy, the data centers can increase consumption by speeding up computations.
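
A scheduler built on this idea can be quite simple in outline. The sketch below is hypothetical: it assumes a renewable-supply signal from the grid operator and a job interface with pause, resume, and power-cap controls, none of which correspond to a specific product.

```python
# A minimal sketch of a demand-response throttle for training jobs, assuming
# the data center receives a renewable-supply signal from the grid operator.
# The signal source, thresholds, and job interface below are all hypothetical.

LOW_RENEWABLES = 0.2   # assumed supply fraction below which we shed load
HIGH_RENEWABLES = 0.6  # assumed supply fraction above which we speed up

def renewable_fraction() -> float:
    """Placeholder for a real grid signal (e.g. a utility or ISO data feed)."""
    return 0.5

def adjust_training_jobs(jobs):
    share = renewable_fraction()
    for job in jobs:
        if share < LOW_RENEWABLES and job.priority == "low":
            job.pause()              # defer low-priority AI workloads
        elif share > HIGH_RENEWABLES:
            job.resume()
            job.set_power_cap(None)  # run at full speed on surplus energy
        else:
            job.set_power_cap(0.7)   # otherwise run under a partial power cap
```

In practice, the thresholds and power caps would come from an agreement with the utility, which is exactly the coordination Paschalidis describes.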

The smart integration of power grids and AI data centers, Paschalidis says, will help manage the intermittency of renewable energy sources while also reducing the need to have too much stand-by capacity in dormant electricity plants.

Scientists and researchers are looking for ways to create AI systems that don't need huge amounts of data during training. After all, the human brain, which AI scientists try to replicate, uses a fraction of the data and power that current AI systems use.

During this year's AAAI Conference, Yann LeCun, a deep-learning pioneer, discussed self-supervised learning: deep-learning systems that can learn with much less data. Others, including cognitive scientist Gary Marcus, believe that the way forward is hybrid artificial intelligence, a combination of neural networks and the more classic rule-based approach to AI. Hybrid AI systems have proven to be more data- and energy-efficient than pure neural-network-based systems.

"It's clear that the human brain doesn't require large amounts of labeled data. We can generalize from relatively few examples and figure out the world using common sense. Thus, 'semi-supervised' or 'unsupervised' learning requires far less data and computation, which leads to both faster computation and less energy use," Cohn says.

Read this article:
AI Could Save the World, If It Doesn't Ruin the Environment First - PCMag

Read More..

AMD Extends 2nd Gen AMD EPYC Processor Family with New Processors – IT News Online

IT News Online Staff | 2020-04-18

AMD has extended the 2nd Gen AMD EPYC processor family with three new processors that combine the balanced and efficient AMD Infinity architecture with higher speed "Zen 2" cores for optimal performance on database, commercial high-performance computing (HPC) and hyperconverged infrastructure workloads.

The AMD EPYC 7Fx2 processors provide new performance capabilities for workloads at the heart of the enterprise market, including:

- Database: up to 17 percent higher SQL Server performance compared to the competition
- Hyperconverged infrastructure: up to 47 percent higher VMmark 3.1 score (using vSAN as the storage tier in a 4-node cluster) compared to the competition, a new world record
- Commercial high-performance computing (HPC): up to 94 percent higher per-core computational fluid dynamics individual application performance compared to the competition

"AMD EPYC continues to redefine the modern data center, and with the addition of three powerful new processors we are enabling our customers to unlock even better outcomes at the heart of the enterprise market," said Dan McNamara, senior vice president and general manager, server business unit, AMD. "With our trusted partners, together we are pushing the limits of per core performance and value in hyperconverged infrastructure, commercial HPC and relational database workloads."

A Balanced System That's More than Gigahertz

The new 2nd Gen AMD EPYC 7Fx2 processors provide leading per core performance and breakthrough value, while adding the highest per core performance of the EPYC family.

The performance of these new processors comes from a balanced architecture that combines high-performance "Zen 2" cores, innovations in system design like PCIe 4 and DDR4-3200 memory, and the AMD Infinity architecture, to provide customers with optimum system performance that enables better real world application performance.

Ecosystem Growing with AMD EPYC

The ecosystem of OEMs, cloud providers, ISVs and IHVs using 2nd Gen AMD EPYC processors continues to grow, with existing OEMs and new partners adopting the new AMD EPYC 7Fx2 processors.

Dell Technologies will support all three processors across its entire lineup of AMD EPYC based Dell EMC PowerEdge servers, including the R6525, which holds a world record 2P Four-Node Benchmark Result on VMmark 3 with VMware vSAN.

"These new AMD EPYC 7Fx2 processors enable Dell EMC PowerEdge servers to drive substantial performance benefits for customer business applications like database and hyperconverged infrastructure, where Dell EMC PowerEdge servers hold a world record in benchmark performance. Our customers will truly benefit from these new processors as we continue to grow our AMD EPYC family of PowerEdge platforms," said Rajesh Pohani, vice president, Server Platform Product Management, Dell Technologies.

HPE continues to expand its offerings using 2nd Gen AMD EPYC processors with latest support of HPE SimpliVity, an intelligent hyper-converged infrastructure solution. HPE will also support all three AMD EPYC 7Fx2 processors on the recently announced HPE Apollo 2000 Gen10 Plus system, HPE ProLiant DL385 Gen10 Plus server and HPE ProLiant DX servers.

"We are pleased to expand support of the 2nd Gen AMD EPYC processors across our portfolios, which include new additions with the HPE Apollo 2000 Gen10 Plus system, HPE ProLiant DL385 Gen10 Plus server and HPE ProLiant DX servers to meet high-frequency and performance needs for our customers in high-performance computing and database environments," said Peter Ungaro, senior vice president and general manager, HPC and Mission Critical Solutions (MCS), HPE.

IBM Cloud is the first cloud provider to offer its clients the AMD EPYC 7F72 processors in their bare metal offering, providing access to fast, high core-count dual socket bare metal servers. Additionally, IBM recently announced the availability of its first bare metal server powered by the AMD EPYC 7642 processor.

"We are excited to be the first cloud provider to support the new AMD EPYC 7F72 processor. Now, IBM Cloud provides access to another high core-count dual socket bare metal server with high clock speed frequency, giving our clients more optimized platform choices for compute-intense workloads such as analytics, commercial HPC and EDA. We stay committed to enabling flexible and powerful bare metal experiences for clients to enhance performance and throughput," said Satinder Sethi, general manager, IBM Cloud Infrastructure Services.

Lenovo will support the new AMD EPYC 7Fx2 processors on its ThinkSystem SR635 and SR655 platforms. These ThinkSystem platforms are already a great choice for a variety of enterprise workloads including data analytics, software-defined storage and infrastructure for remote workers. Lenovo's storage and PCIe capabilities, coupled with AMD EPYC core count and I/O density, will help provide customers with choice as their business needs evolve. These new higher-frequency 2nd Gen AMD EPYC processors, with core clock speeds increased by up to 15 percent, give customers of the single-socket ThinkSystem platforms greater options for workloads where per-core performance is critical. Lenovo's one-socket optimized platforms with these new processors allow customers to deploy them where traditionally two-socket systems were used, providing power and software licensing cost savings.

"Today's business dynamics are presenting customers with new challenges to improve speed, cost and performance. We feel confident we have the right portfolio to provide our customers with enhanced choice as organizations look to enable remote working capabilities and manage their increased data and storage requirements," said Kamran Amini, vice president and general manager, Server, Storage and Software Defined Infrastructure, Lenovo Data Center Group.

Microsoft recognizes the impact the new AMD EPYC 7Fx2 processors have on providing Microsoft data platform customers the best experience possible, including up to 17 percent higher SQL Server TPM-per-core performance. "Microsoft data platform solutions help customers release the potential hidden in data and reveal insights and opportunities to transform a business. A critical part of this process is making sure a database has access to an efficient, powerful and fast processor and that's exactly what the new AMD EPYC 7Fx2 processors provide Microsoft data platform solutions customers," said Jamie Reding, SQL Server program manager, Microsoft.

Nutanix, in conjunction with HPE, announced that it expects that Nutanix HCI software will be supported on select AMD EPYC based HPE ProLiant servers by May. As well, HPE announced the upcoming availability of AMD EPYC 7Fx2 processors on HPE ProLiant DX servers in Q3.

"We are excited to have validated Nutanix's HCI software for 2nd Gen AMD EPYC processor based HPE ProLiant systems. This will bring 2nd Gen AMD EPYC processor support to Nutanix software, giving more flexibility and choice to our customers while unleashing greater workload performance for databases, analytics, VDI and other virtualized business critical applications," said Tarkan Maner, chief commercial officer, Nutanix.

Supermicro is launching the industry's first blade platform built for 2nd Gen AMD EPYC processors with immediate support for the new AMD EPYC 7Fx2 processors combined with integrated 25G Ethernet and optional 100G EDR InfiniBand support with 200G HDR in the near future. In addition, all Supermicro A+ platforms including Ultra, GPU, WIO, Twin and Mainstream systems will support the new AMD EPYC 7Fx2 processors immediately.

"Adding the new SuperBlade platform to our extensive portfolio of products supporting the 2nd Gen AMD EPYC processors gives our customers another powerful choice when redefining their modern data center. Leveraging support for the new AMD EPYC 7Fx2 processors, our latest SuperBlade and Supermicro A+ platforms further excel at database, EDA and other data-intensive workloads," said Vik Malyala, senior vice president, Field Application Engineering and Business Development, Supermicro.

VMware has added support for the new 2nd Gen AMD EPYC 7Fx2 processors, providing customers with access to powerful virtualization platforms.

"The 2nd Gen AMD EPYC 7Fx2 processors bring new value to VMware customers. They provide a unique balance of strong per core performance coupled with an industry-leading per-processor memory capacity of 4 TB. A key element of VMware vSphere, vSAN and now VMware Cloud Foundation market success has been our commitment to helping customers quickly adopt the latest hardware innovation," said Richard A. Brunner, chief technology officer, Server Platform Technologies, VMware.

The new processors are available now through multiple OEMs and IBM Cloud.

See more here:
AMD Extends 2nd Gen AMD EPYC Processor Family with New Processors - IT News Online

Read More..

Don’t buy a hard drive – get 5TB of cloud storage instead – TechRadar

Polarbackup has introduced a new ultra-affordable cloud storage service, offering rock-bottom prices without compromising on reliability or security - and TechRadar readers receive an exclusive 92% discount.

Polarbackup 5TB cloud storage - $79.99/£66.75/AU$125
This offer from Polarbackup is jaw-droppingly good. It's cheaper than purchasing a 5TB hard drive, and you gain access to full cloud backup capabilities to boot. Polarbackup is also operated by Zoolz, an established player in the cloud storage market, so you can be sure your data is secure in the long term.


At $79.99/£66.75 (about AU$125) for a lifetime 5TB subscription, Polarbackup is cheaper than buying a hard disk drive of equivalent capacity. Lower capacities are available, but the 5TB version remains the cheapest at less than $16 per TB.

Your data is never deleted (as the subscription never expires), you can back up an unlimited number of external devices - from USB drives to CCTV systems - and Polarbackup even supports file versioning.

Polarbackup supports both Windows and Mac and uses zero knowledge, 256-bit encryption to keep your files safe. The service is also operated by Zoolz (one of the best cloud storage providers on the market) so you can be sure your data is safe and secure in the long term.

Have you managed to get hold of a cheaper product with equivalent specifications? Let us know and we'll tip our hat to you.

Just bear in mind that this is a cold storage service, which means you won't be able to retrieve files instantaneously. You may have to wait up to 12 hours (but likely less) to access your files, which could pose issues for some.

Read the original here:
Don't buy a hard drive - get 5TB of cloud storage instead - TechRadar

Read More..

Your Zoom videos could live on in the cloud even after you delete them – CNET


If you clicked Record to Cloud during a Zoom meeting, you might have assumed Zoom and the cloud storage provider would have password-protected your video by default once it was uploaded. And if you deleted that video from your Zoom account, you might have assumed it was gone for good. But in the latest example of the security and privacy woes that continue to plague Zoom, a security researcher found a vulnerability that turned those assumptions on their heads.

A week ago, Phil Guimond discovered a vulnerability that allowed someone to search for stored Zoom videos using share links that contain part of a URL, such as a company or organization name. The videos could then be downloaded and viewed. Guimond also created a tool, called Zoombo, that exploited a limitation of Zoom's privacy protection, cracking passwords on videos that savvy users had manually protected. He discovered that deleted videos remained available for several hours before disappearing.

(Disclosure: Guimond is an information security architect for CBS Interactive, of which CNET is a part, within the larger parent company of ViacomCBS.)

"Zoom has not considered security at all when developing their software," Guimond told CNET. "Their offerings have some of the highest amount of low-hanging-fruit vulnerabilities in the industry for a mainstream product."

On Saturday, Zoom rolled out an update after CNET inquired about the vulnerability. The app now adds a Captcha challenge when someone clicks on a share link. The update effectively stopped Zoombo, but left the core vulnerability unfixed. Hackers can still manually follow share links once a Captcha has been defeated. The company rolled out further security updates Tuesday to bolster the privacy of uploaded videos.

"Upon learning of this issue, we took immediate action to prevent brute-force attempts on password-protected recording pages by adding rate limit protections through reCaptcha," a Zoom spokesman told CNET. "To further strengthen security, we have also implemented complex password rules for all future cloud recordings, and the password protection setting is now turned on by default," a Zoom spokesman told CNET.

The new Zoom exploit was discovered as the video conference platform draws attention for security and privacy problems that have been exposed by the rapid growth of its user base. As the coronavirus pandemic forced millions of people to stay home over the past month, Zoom suddenly became the video meeting service of choice. Daily meeting participants on the platform surged from 10 million in December to 200 million in March.

As it grew in popularity, so did the number of people exposed to Zoom's privacy risks, with concerns ranging from built-in attention-tracking features to "Zoombombing," the practice of uninvited attendees breaking into and disrupting meetings with hate-filled or pornographic content. Zoom has also allegedly shared user data with Facebook, prompting at least three lawsuits against the company.


Share links are just what they sound like: links that users share to invite someone to a Zoom meeting. They're simpler than a video's lengthier permanent URL and usually include part of a company's or organization's name. Some share links can be found through URL-targeted Google searches, and the links' corresponding videos could then be targets for malicious actors to download if users didn't manually password-protect them. Even those that have been protected were previously limited in password length, making them vulnerable to attack.

Guimond, who said he presented his findings to Zoom but didn't get a response, tried password-protecting his own videos because they weren't protected by default. After that, he wrote some code to bombard Zoom with attempts to open the video, a process known as brute force. The passwords could be cracked, he said.

A growing list of government entities domestically and globally have restricted the use of Zoom for state business. In early April, the German Ministry of Foreign Affairs reportedly cautioned staff against the software. Singapore banned teachers from using it to teach remotely.

In the same week, the US Senate reportedly told members to avoid using Zoom for remote work during the coronavirus lockdown.

One of Guimond's core security concerns is that Zoom stores all Record to Cloud videos in a single bucket, the term for an unprotected swath of Amazon cloud storage space. Anyone can access a video if they have the link, a threat similar to one previously reported by The Washington Post, but which poses a more specific threat to corporate accounts.

Once someone obtains a video's permanent link, they can also capture a Zoom meeting ID. That meeting ID could allow them to target a user individually, potentially opening up that user to Zoombombing and other privacy invasions.

To illustrate the potential privacy risk to companies, Guimond said that if someone were able to break into a corporate Slack conversation, a place where Zoom share links are routinely swapped, the hacker would have lots of opportunity to compromise corporate privacy.

"These [share links] don't require authentication by default," Guimond said. "You can even open them in a private window.

While Zoom's Tuesday update changed the software's default upload option to require some form of authentication, links to any videos recorded to the cloud prior to the update could still be vulnerable. In the company's Tuesday blog post, Zoom said "existing shared recordings are not affected" by the updates.

Asked whether Zoom has taken any steps -- or plans to -- to protect the privacy of videos previously recorded to the cloud, the company urged users to take their own precautions.

"While we are not changing settings for existing recordings, if users wish to turn on password protection or restrict access to authenticated users, they can do so at any time and we welcome them to do so," said the Zoom spokesman.

"In general, should hosts choose to share recordings publicly or with authenticated users, or upload their meeting recordings anywhere else, we urge them to use extreme caution and be transparent with meeting participants, giving careful consideration to whether the meeting contains sensitive information and to participants' reasonable expectations," he said.

If you're thinking it may be easier to simply delete those videos, you may need to allot more time. When Guimond looked into the security of permanent links associated with Zoom meetings, he found that deleted Zoom videos were still accessible for a few hours following deletion.

"If you add a password and delete the file, you reduce your risk," he said. "But it may still exist on the [Amazon Web Services storage] bucket," said Guimond.

When CNET inquired about Guimond's discovery, Zoom said it would investigate the matter.

"Based on our current findings, the unique URL to access a recording view page immediately stops working after deletion, so it cannot be accessed," said a Zoom spokesman. "However, if someone has recently watched the recording around the time it is deleted, they can continue to watch for a period of time before the viewing session expires. We continue to investigate the matter."

Asked what users and organizations can do to improve the privacy and security of videos previously uploaded to the cloud, Guimond advised taking another look at the settings.

"I'd recommend you go back and password-protect them with a strong password, and possibly delete them afterwards," he said.

Visit link:
Your Zoom videos could live on in the cloud even after you delete them - CNET

Read More..

Pioneer DJ announce rekordbox 6 with cloud storage and sync – DJ Mag

Pioneer DJ has announced a major update to rekordbox. Version 6 adds a feature called Cloud Library Sync, allowing users to upload their collections to a linked Dropbox account, where they can access them across a maximum of four devices. Metadata and analysis information is also stored in the cloud and can be edited and synced back to the cloud.

Performance Mode, the DJing aspect of rekordbox, is now free for all users, although not all hardware controllers will work in the free version. Pioneer DJ has also introduced new subscription tiers: Free, Core and Creative. Core costs 7.99 per month and Creative costs 9.99 per month, both of which are introductory offers running until July, when they'll increase to 9.99 and 14.99 a month. Users who subscribe before the end of the introductory offer will continue to pay the lower price indefinitely.

Other new features include an auto-relocate for missing files and a new Light skin mode to help use the software outside. Promo company InFlyte has also been added to rekordbox 6, with the ability to access and download your Promo Locker directly from within the software. Any cue points and loop points will remain from previous collections when you upgrade to version 6.

There's also a new rekordbox iOS app that allows you to make playlists, add cue and loop points, change metadata and more. The app syncs with your Dropbox account and any changes are automatically available on your other devices. The new app also lets you DJ from your phone, using the mobile as a portable storage device.

See the article here:
Pioneer DJ announce rekordbox 6 with cloud storage and sync - DJ Mag

Read More..