Etsy error resulted in large amounts being withdrawn from some sellers’ bank accounts and credit cards


An Etsy bill payment error resulted in large amounts of money being withdrawn from several sellers’ bank accounts and credit cards on Friday morning. While the company says the issue has been resolved and was not the result of fraud, the headache isn’t over for affected sellers because Monday is a federal holiday in the United States, and many financial institutions are closed.

Etsy sellers are required to have a valid credit or debit card on file with Etsy in order to have a payment account. Boing Boing reports that complaints first began emerging in Etsy’s Community Forums and on Twitter on Friday morning, when sellers began noticing amounts ranging from hundreds to tens of thousands of dollars had been withdrawn from or charged to those accounts.

An Etsy representative posted a brief message in its forum stating that the company was “aware of a bill payment error affecting a small group of sellers which resulted in some cards being incorrectly charged.” Then on Sunday afternoon, Etsy sent a longer explanation to sellers. The company said it has already refunded all incorrectly charged cards and will be sending deposits on Tuesday.

“An update on recent issues affecting payment accounts

On Friday, February 15, a bill payment error affected a small group of sellers which resulted in some cards being incorrectly charged. Sellers who were affected have been notified by email, or by Etsy Conversations, and the issue that caused this has since been resolved.

As part of fixing this issue, all incorrectly charged cards have been refunded. It may take several business days for the refunded amounts to clear and settle in card accounts.  Also related to fixing the root problem, some sellers saw their scheduled deposit of funds returned to Etsy on Friday, February 15, and those deposits will now be sent on Tuesday, February 19.

For affected sellers, we are very sorry for the trouble or concern this may have caused. Our first priority has been to correct the issue. This was not a fraud issue, but instead an error related to a site change which affects a small group of sellers and is unrelated to buyers’ purchases.

This is an issue we do not take lightly. We’ve assembled a Payments task force, including senior executives across Etsy, to address any concerns or troubles resulting from this error. We will refund any undue fees associated with this incorrect charge and change in deposit schedule. We don’t expect this error to impact additional sellers going forward.”

The explanation was not enough for many sellers, who said hourly updates should have been posted for a problem of this magnitude, and that Etsy had not addressed how it will compensate them for overdraft or late fees, or if the returned deposits will appear on their 1099s. TechCrunch has contacted Etsy for comment.


http://feedproxy.google.com/~r/Techcrunch/~3/sCCZCHjQenc/


We’re all second-screening. Here’s how you’re doing it wrong.


Second-screening — watching TV while also looking at your phone, tablet or laptop — is probably the most widely adopted destructive behavior of the decade. We keep hearing that it’s bad for us; we keep doing it regardless. It’s the smoking of the 2010s.

Psychologists were sounding the alarm as early as 2012 that this kind of screen-based multitasking seemed to be correlated with depression and anxiety. Did we listen? Did we hell. Back then, according to Nielsen, a mere (!) 40 percent of American adults looked at their phones or tablets every day while parked in front of the tube. By 2017, according to eMarketer, that number had climbed to over 70 percent.


https://mashable.com/article/second-screening/

5 GHz Wi-Fi Isn’t Always Better Than 2.4 GHz Wi-Fi

(Image: wireless router and kids using a laptop at home. Casezy idea/Shutterstock.com)

Are you having trouble with your Wi-Fi connection? Try using 2.4 GHz instead of 5 GHz. Sure, 5 GHz Wi-Fi is newer, faster, and less congested—but it has a weakness. 2.4 GHz is better at covering large areas and penetrating through solid objects.

5 GHz vs. 2.4 GHz: What’s the Difference?

Wi-Fi can run on two different “bands” of radio frequency: 5 GHz and 2.4 GHz. 5 GHz Wi-Fi went mainstream with 802.11n—now known as Wi-Fi 4—which was introduced back in 2009. Before that, Wi-Fi was largely 2.4 GHz.

This was a big upgrade! 5 GHz uses shorter radio waves, and that provides faster speeds. WiGig takes this further and operates on the 60 GHz band. That means even shorter radio waves, resulting in even faster speeds over a much smaller distance.
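As a rough illustration of the frequency-to-wavelength relationship behind this, the free-space wavelength is just the speed of light divided by the frequency; a quick sketch in Python:

    # Free-space wavelength: lambda = c / f
    C = 299_792_458  # speed of light, m/s

    for label, freq_hz in [("2.4 GHz", 2.4e9), ("5 GHz", 5e9), ("60 GHz WiGig", 60e9)]:
        wavelength_cm = C / freq_hz * 100
        print(f"{label}: ~{wavelength_cm:.1f} cm")

    # Prints roughly: 2.4 GHz: ~12.5 cm, 5 GHz: ~6.0 cm, 60 GHz WiGig: ~0.5 cm

The higher the band, the shorter the wave, which tracks with the shrinking range described below.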

There’s also much less congestion with 5 GHz. That means a more solid, reliable wireless connection, especially in dense areas with a lot of networks and devices. Traditional cordless telephones and wireless baby monitors also operate on 2.4 GHz. That means they only interfere with 2.4 GHz Wi-Fi—not 5 GHz Wi-Fi.

In summary, 5 GHz is faster and provides a more reliable connection. It’s the newer technology, and it’s tempting to use 5 GHz all the time and write off 2.4 GHz Wi-Fi. But 5 GHz Wi-Fi’s shorter radio waves mean it covers less distance and isn’t as good at penetrating solid objects as 2.4 GHz Wi-Fi is. In other words, 2.4 GHz can cover a larger area and is better at getting through walls.

RELATED: What’s the Difference Between 2.4 and 5-Ghz Wi-Fi (and Which Should I Use)?

You Can Use Both With One Router

Modern routers are generally “dual-band” routers and can simultaneously operate separate Wi-Fi networks on the 5 GHz and 2.4 GHz frequencies. Some are “tri-band routers” that can provide a 2.4 GHz signal along with two separate 5 GHz signals for less congestion among Wi-Fi devices operating on 5 GHz.

This isn’t just a compatibility feature for old devices that only support 2.4 GHz Wi-Fi. There are times you’ll want 2.4 GHz Wi-Fi even with a modern device that supports 5 GHz.

Routers can be configured in one of two ways: They can hide the difference between the 2.4 GHz and 5 GHz networks or expose it. It all depends on how you name the two separate Wi-Fi networks.
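On Linux-based routers, for instance, each radio is typically driven by its own hostapd configuration, so the naming decision comes down to the ssid line in each file. A minimal sketch (the interface names and channels here are illustrative, not from any particular router):

    # 2.4 GHz radio -- one hostapd config file
    interface=wlan0
    ssid=HomeNet
    hw_mode=g
    channel=6

    # 5 GHz radio -- a second hostapd config file
    interface=wlan1
    ssid=HomeNet-5G
    hw_mode=a
    channel=36

Reusing “HomeNet” on both radios would hide the split and let devices pick a band themselves; the distinct “HomeNet-5G” name exposes it so you can choose manually.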


https://www.howtogeek.com/405105/5ghz-wi-fi-isnt-always-better-than-2.4ghz-wi-fi/

New Apple rumor is every fan’s dream come true


Apple will launch a ton of new gear this year, a new report claims. And some of it has been on our wish list for a long, long time. 

Apple analyst Ming-Chi Kuo, known for his accurate predictions, has laid out Apple’s hardware plans for the year in a huge note (via MacRumors), and they include a fresh size for the MacBook Pro, new iPads, and a huge, lust-worthy new monitor. 

According to Kuo, Apple plans to launch a MacBook Pro with a 16-inch to 16.5-inch display and an “all-new” design. He offers no other details, but with that screen size, it’s likely to sit atop the company’s laptop lineup (the 17-inch Pro was discontinued in 2012). Apple is often criticized for turning its MacBook Pro into a laptop aimed at the general populace instead of professionals; perhaps this new, larger variant will be the pro’s Pro, with top specs and (fingers crossed) more ports.


https://mashable.com/article/apple-hardware-2019/

How to Embed a YouTube Video in PowerPoint


During a presentation, a mix of media always performs best. Using images, graphs, charts, and videos not only makes your presentation more informative but also more engaging for the audience. If you have a YouTube video you’d like to use during your presentation, it’s as simple as embedding it in a slide. Here’s how.

Finding a YouTube Video’s Embed Code

Rather than linking to a YouTube video in your presentation, embedding it in the slide is usually the better option. It gives your presentation a more professional look because you won’t be leaving your slide to pop open the YouTube website. Keep in mind, though, that even with the video embedded in your presentation, you’ll still need to be connected to the internet to play the video.

First, head over to YouTube and find the video you want to embed. Once you’re there, select the “Share” option, which you’ll find in the video description.

(Screenshot: the Share button on YouTube)

A window will appear, giving you a few different ways to share the video. Go ahead and click the “Embed” option in the “Share a link” section.

(Screenshot: the Embed option)

Another window will appear, providing the embed code along with a few other options. If you want to start the video at a particular time, select the “Start at” box and enter the time when you’d like the video to start. Additionally, you can select whether you’d like the player controls to appear and whether you want to enable privacy-enhanced mode.

Note: Privacy-enhanced mode keeps YouTube from storing information about visitors to the page where the video is embedded unless they play the video. Since we will be using the embed code in a PowerPoint presentation, this option is not necessary.
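For reference, the embed code YouTube generates is a small HTML iframe along these lines (VIDEO_ID is a placeholder; the start value is in seconds, and privacy-enhanced mode simply swaps in the youtube-nocookie.com domain):

    <!-- Standard embed, starting playback at 1:30 (start=90 seconds) -->
    <iframe width="560" height="315"
            src="https://www.youtube.com/embed/VIDEO_ID?start=90"
            frameborder="0" allowfullscreen></iframe>

    <!-- Privacy-enhanced mode: same markup, different domain -->
    <iframe width="560" height="315"
            src="https://www.youtube-nocookie.com/embed/VIDEO_ID"
            frameborder="0" allowfullscreen></iframe>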


https://www.howtogeek.com/402488/how-to-embed-a-youtube-video-in-powerpoint/

VPN protocol WireGuard now has an official macOS app

WireGuard could be the most promising VPN protocol in years. It lets you establish a connection with a VPN server that is supposed to be faster, more secure and more flexible at the same time. The developers launched a brand new app in the Mac App Store today.

WireGuard isn’t a VPN service, it’s a VPN protocol, just like OpenVPN or IPsec. The best thing about it is that it can maintain a VPN connection even if you change your Wi-Fi network, plug in an Ethernet cable or your laptop goes to sleep.

But if you want to use WireGuard for your VPN connection you need to have a VPN server that supports it, and a device that supports connecting to it. You can already download the WireGuard app on Android and iOS, but today’s release is all about macOS.

The team behind WireGuard has been working on a macOS implementation for a while, but until now it wasn’t as straightforward as an app: you had to install wireguard-tools using Homebrew and then establish a connection from the command line in Terminal.

It’s much easier now, as you just have to download an app in the Mac App Store and add your server profile. The app is a drop-down menu in the menu bar. You can manage your tunnel and activate on-demand connections for some scenarios. For instance, you could choose to activate your VPN exclusively if you’re connected to the internet using Wi-Fi, and not Ethernet.
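For context, the server profile you add is a short INI-style tunnel configuration; a minimal sketch looks like this (the keys, addresses and endpoint are placeholders you would get from your own server):

    [Interface]
    PrivateKey = <client-private-key>
    Address = 10.0.0.2/24
    DNS = 1.1.1.1

    [Peer]
    PublicKey = <server-public-key>
    Endpoint = vpn.example.com:51820
    AllowedIPs = 0.0.0.0/0   # route all traffic through the tunnel
    PersistentKeepalive = 25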

I tried the app and it’s as snappy and reliable as expected. The app leverages Apple’s standard Network Extension API to add VPN tunnels to the network panel in the settings.

If you want to try WireGuard yourself, I recommend building your own VPN server using Algo VPN. Don’t trust any VPN company that sells you a subscription or lets you access free VPN servers. A VPN company can see all your internet traffic on their own servers, which is a big security risk.

Assume that those companies analyze your browsing habits, sell them to advertisers, inject their own ads on non-secure pages or steal your identity. The worst of them can hand authorities a ton of data about your online life.

They lie in privacy policies and often don’t even have an About page with the names of people working for those companies. They spend a ton of money buying reviews and endorsements. You should avoid VPN companies at all costs.

If you absolutely need a VPN server because you can’t trust the Wi-Fi network or you’re traveling to a country with censored websites, make sure you trust the server.


http://feedproxy.google.com/~r/Techcrunch/~3/O3KRpMtMtVg/

YouTube under fire for recommending videos of kids with inappropriate comments


More than a year on from a child-safety content moderation scandal on YouTube, it still takes just a few clicks for the platform’s recommendation algorithms to redirect a search for “bikini haul” videos of adult women toward clips of scantily clad minors engaged in body-contorting gymnastics, taking an ice bath or doing an ice lolly sucking “challenge.”

A YouTube creator called Matt Watson flagged the issue in a critical Reddit post, saying he found scores of videos of kids where YouTube users are trading inappropriate comments and timestamps below the fold, denouncing the company for failing to prevent what he describes as a “soft-core pedophilia ring” from operating in plain sight on its platform.

He has also posted a YouTube video demonstrating how the platform’s recommendation algorithm pushes users into what he dubs a pedophilia “wormhole,” accusing the company of facilitating and monetizing the sexual exploitation of children.

We were easily able to replicate the YouTube algorithm’s behavior that Watson describes in a history-cleared private browser session which, after clicking on two videos of adult women in bikinis, suggested we watch a video called “sweet sixteen pool party.”

Clicking on that led YouTube’s side-bar to serve up multiple videos of prepubescent girls in its “up next” section, where the algorithm tees up related content to encourage users to keep clicking.

Videos we got recommended in this side-bar included thumbnails showing young girls demonstrating gymnastics poses, showing off their “morning routines,” or licking popsicles or ice lollies.

Watson said it was easy for him to find videos containing inappropriate/predatory comments, including sexually suggestive emoji and timestamps that appear intended to highlight, shortcut and share the most compromising positions and/or moments in the videos of the minors.

We also found multiple examples of timestamps and inappropriate comments on videos of children that YouTube’s algorithm recommended we watch.

Some comments by other YouTube users denounced those making sexually suggestive remarks about the children in the videos.

Back in November 2017, several major advertisers froze spending on YouTube’s platform after an investigation by the BBC and the Times discovered similarly obscene comments on videos of children.

Earlier the same month YouTube was also criticized over low-quality content targeting kids as viewers on its platform.

The company went on to announce a number of policy changes related to kid-focused video, including saying it would aggressively police comments on videos of kids and that videos found to have inappropriate comments about the kids in them would have comments turned off altogether.

Some of the videos of young girls that YouTube recommended we watch had already had their comments disabled, which suggests its AI had previously identified a large number of inappropriate comments being shared (per its policy of switching off comments on clips containing kids when comments are deemed “inappropriate”). Yet the videos themselves were still being suggested for viewing in a test search that originated with the phrase “bikini haul.”

Watson also says he found ads being displayed on some videos of kids containing inappropriate comments, and claims that he found links to child pornography being shared in YouTube comments too.

We were unable to verify those findings in our brief tests.

We asked YouTube why its algorithms skew toward recommending videos of minors, even when the viewer starts by watching videos of adult women, and why inappropriate comments remain a problem on videos of minors more than a year after the same issue was highlighted via investigative journalism.

The company sent us the following statement in response to our questions:

Any content — including comments — that endangers minors is abhorrent and we have clear policies prohibiting this on YouTube. We enforce these policies aggressively, reporting it to the relevant authorities, removing it from our platform and terminating accounts. We continue to invest heavily in technology, teams and partnerships with charities to tackle this issue. We have strict policies that govern where we allow ads to appear and we enforce these policies vigorously. When we find content that is in violation of our policies, we immediately stop serving ads or remove it altogether.

A spokesman for YouTube also told us it’s reviewing its policies in light of what Watson has highlighted, adding that it’s in the process of reviewing the specific videos and comments featured in his video — specifying also that some content has been taken down as a result of the review.

However, the spokesman emphasized that the majority of the videos flagged by Watson are innocent recordings of children doing everyday things. (Though of course the problem is that innocent content is being repurposed and time-sliced for abusive gratification and exploitation.)

The spokesman added that YouTube works with the National Center for Missing and Exploited Children to report to law enforcement accounts found making inappropriate comments about kids.

In wider discussion about the issue the spokesman told us that determining context remains a challenge for its AI moderation systems.

On the human moderation front he said the platform now has around 10,000 human reviewers tasked with assessing content flagged for review.

The volume of video content uploaded to YouTube is around 400 hours per minute, he added.

There is still very clearly a massive asymmetry around content moderation on user-generated content platforms, with AI poorly suited to plug the gap given ongoing weakness in understanding context, even as platforms’ human moderation teams remain hopelessly under-resourced and outgunned versus the scale of the task.
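A back-of-envelope sketch makes the asymmetry concrete, using the two figures YouTube supplied (the eight-hour reviewer shift is our assumption, and in practice reviewers only assess flagged content rather than everything):

    # Rough scale comparison using the figures quoted above
    upload_hours_per_day = 400 * 60 * 24   # 400 hours uploaded per minute
    review_hours_per_day = 10_000 * 8      # assume an 8-hour shift per reviewer

    print(f"Uploaded per day: {upload_hours_per_day:,} hours")   # 576,000
    print(f"Review capacity:  {review_hours_per_day:,} hours")   # 80,000
    print(f"Shortfall: ~{upload_hours_per_day / review_hours_per_day:.0f}x")  # ~7x

Even if every reviewer did nothing but watch new uploads, capacity would fall short by roughly a factor of seven.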

Another key point YouTube failed to mention is the clear tension between advertising-based business models that monetize content based on viewer engagement (such as its own), and content safety issues that need to carefully consider the substance of the content and the context in which it has been consumed.

It’s certainly not the first time YouTube’s recommendation algorithms have been called out for negative impacts. In recent years the platform has been accused of automating radicalization by pushing viewers toward extremist and even terrorist content — which led YouTube to announce another policy change in 2017 related to how it handles content created by known extremists.

The wider societal impact of algorithmic suggestions that inflate conspiracy theories and/or promote bogus, anti-factual health or scientific content have also been repeatedly raised as a concern — including on YouTube.

And only last month YouTube said it would reduce recommendations of what it dubbed “borderline content” and content that “could misinform users in harmful ways,” citing examples such as videos promoting a fake miracle cure for a serious illness, or claiming the earth is flat, or making “blatantly false claims” about historic events such as the 9/11 terrorist attack in New York.

“While this shift will apply to less than one percent of the content on YouTube, we believe that limiting the recommendation of these types of videos will mean a better experience for the YouTube community,” it wrote then. “As always, people can still access all videos that comply with our Community Guidelines and, when relevant, these videos may appear in recommendations for channel subscribers and in search results. We think this change strikes a balance between maintaining a platform for free speech and living up to our responsibility to users.”

YouTube said that change of algorithmic recommendations around conspiracy videos would be gradual, and only initially affect recommendations on a small set of videos in the U.S.

It also noted that implementing the tweak to its recommendation engine would involve both machine learning tech and human evaluators and experts helping to train the AI systems.

“Over time, as our systems become more accurate, we’ll roll this change out to more countries. It’s just another step in an ongoing process, but it reflects our commitment and sense of responsibility to improve the recommendations experience on YouTube,” it added.

It remains to be seen whether YouTube will expand that policy shift and decide it must exercise greater responsibility in how its platform recommends and serves up videos of children for remote consumption in the future.

Political pressure may be one motivating force, with momentum building for regulation of online platforms — including calls for internet companies to face clear legal liabilities and even a legal duty of care toward users vis-à-vis the content they distribute and monetize.

For example, U.K. regulators have made legislating on internet and social media safety a policy priority — with the government due to publish this winter a white paper setting out its plans for regulating platforms.


http://feedproxy.google.com/~r/Techcrunch/~3/gPLgMVC5x8k/

Android’s Real Security Problem is the Manufacturers

(Image: Samsung Galaxy S9 security patch date. Cameron Summerson)

If you’re running a Google Pixel handset, your phone is safe from a security hole that could let a PNG file completely wreck the system. If you’re using nearly any other Android handset, then your phone is vulnerable. This is a problem.

Google recently released the February security update for Pixel devices, which closes a hole that would allow malicious PNG files to “execute arbitrary code within the context of a privileged process.” In simpler terms, the code can run at a high level and steal your info—all you need to do is open the file. That’s it.

That means any PNG that comes to you—be it in an email, a messaging client, or even over MMS—could potentially hijack the system and steal valuable data. That is, on any phone that isn’t a Pixel, because they’re protected now. Samsung, LG, OnePlus, and most other manufacturers’ handsets are still susceptible to this bug. We have to start holding manufacturers to a higher standard when it comes to security updates. Period.

I currently have four Android phones within arm’s reach: Pixel 2 XL, Pixel 1, Samsung Galaxy S9, and OnePlus 6T. The two Pixels are patched and protected with the February update, but the S9 and 6T are only on the December security patches. That means any newer vulnerabilities—like this PNG one, for example—are unpatched on both of these handsets. Considering that Samsung Galaxy devices are among the most popular phones on the planet, this is troubling.
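If you want to check where your own handset stands, Android exposes the patch level as a system property; a quick sketch using adb (assumes a device with USB debugging enabled and adb on your PATH):

    import subprocess

    # Read the Android security patch level over adb
    result = subprocess.run(
        ["adb", "shell", "getprop", "ro.build.version.security_patch"],
        capture_output=True, text=True,
    )
    print("Security patch level:", result.stdout.strip())  # e.g. 2018-12-01

The same value is shown under Settings > About phone on most devices.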

(Image: Google Pixel 2 XL security patch date. Cameron Summerson)

But this isn’t just about the current bug. It’s a dynamic problem that should be a constant concern: new vulnerabilities are guaranteed, so delayed security updates will always be an issue.

While Android “fragmentation” has long been an issue when it comes to full OS updates (essentially since the platform was introduced), it should not apply to security updates. These are not “new features are cool, and I want them” updates; they are crucial data-protecting updates. Small or not, this isn’t something any consumer should overlook. Ever.

RELATED: Fragmentation Isn’t Android’s Fault, It’s the Manufacturers’

Currently, manufacturers are doing a terrible job of protecting their users, full stop. While not getting full OS updates (or even point releases) is annoying at best, not getting security updates is unacceptable. It sends a message that can’t be ignored: it says that your phone manufacturer doesn’t care about your data. Your info isn’t important enough for them to protect.

Security updates aren’t huge like full OS updates or even point releases. They’re released monthly by Google, so they’re much smaller and easier to bake into the system—even for third-party manufacturers. Again, there’s no real excuse not to make this a priority.


https://www.howtogeek.com/404700/androids-real-security-problem-is-the-manufacturers/