Verizon Wireless uniquely identifies your traffic for all to see

by
in internet on (#2TRD)
Wired reports that Verizon inserts a unique identifier into all HTTP requests going over its wireless network, subverting Do Not Track, private browsing sessions, switching browsers, and even moving around the network. Verizon has an opt-out page, but it only stops Verizon and its partners from using the identifier to target ads. Obviously, anyone else who sees the header is under no agreement not to use it to build a profile of you. There are anecdotal reports that AT&T may be doing the same. Security researcher Kenneth White has set up a page to check for this header, with more information.
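White's page does the work for you, but if you'd rather test against something you control, the check boils down to echoing back whatever headers actually arrive at a server. Below is a minimal Python sketch along those lines (my own illustration, not White's checker); the X-UIDH header name is the identifier that has been reported for Verizon, so treat it as an assumption, and note the test only makes sense over plain HTTP on the cellular connection, since HTTPS traffic can't be modified in transit.

```python
# Minimal header-echo endpoint (illustrative sketch, not Kenneth White's page).
# The "X-UIDH" name is the header reported for Verizon and is an assumption here.
from http.server import BaseHTTPRequestHandler, HTTPServer

class HeaderCheck(BaseHTTPRequestHandler):
    def do_GET(self):
        # List every header that arrived, so injected ones stand out.
        lines = [f"{name}: {value}" for name, value in self.headers.items()]
        verdict = ("Tracking header present: X-UIDH"
                   if self.headers.get("X-UIDH")
                   else "No X-UIDH header seen in this request.")
        body = "\n".join([verdict, ""] + lines).encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; charset=utf-8")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Browse to this server from your phone with Wi-Fi off; any identifier the
    # carrier injects in transit will show up in the echoed header list.
    HTTPServer(("", 8080), HeaderCheck).serve_forever()
```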

More than 350,000 AT&T customers apply for "cramming" refunds

by
in legal on (#2TRC)
After spending most of the last decade profiting off of cramming, AT&T this month was finally held accountable by the government and fined $105 million by the FTC, FCC, and state governments. A similar investigation is ongoing against T-Mobile, and you can likely expect similar settlements in time with both Verizon and Sprint, who also turned a blind eye for years while scammers bilked their customers (because the carriers netted 30-40% of the profits). The FTC case against AT&T is a great read, detailing at length how AT&T not only turned a blind eye to the scams, but actually made it harder for customers to identify that they were being scammed and to obtain refunds.

With the customer refund process underway, the FTC tells Time that more than 359,000 customers have already applied for refunds, with many, many more expected. AT&T, of course, generates $105 million in about the time it took me to write this post, and the money it made off these scams was potentially dozens of times larger than the fine. Still, it's nice to see the government do its job when big companies are involved, as for most of the decade the FTC and FCC ignored how large carriers helped make these scams possible. Customers need to file their claim before May 1, 2015.

FCC Postpones Auction Of Broadcast TV Spectrum To 2016

by
in mobile on (#2TQT)
The FCC has been working on a voluntary auction of broadcast TV frequencies for years, with plans to have it take place in mid-2015. But today the agency says it will postpone the sale to early 2016 as it grapples with a lawsuit from the National Association of Broadcasters complaining that many TV stations would end up with reduced coverage areas. Supporters of the auction say that unless wireless service providers get more spectrum, the fast-growing ranks of consumers using smartphones, tablets, and other mobile devices will face dropped calls, dead zones, slow speeds, and high prices. The Obama administration is eager to free up 300MHz of bandwidth over five years, and 500MHz over a decade. That will be hard to accomplish without help from broadcasters - the biggest users of spectrum outside the military, who operate on frequencies with propagation characteristics that are particularly desirable to mobile service providers.

The FCC has also said that its auction could be a windfall for some stations, because they would share some of the proceeds. In fact, a full-power TV station in Los Angeles could get as much as $570 million for its spectrum in the federal incentive auction. It's little wonder, then, that Los Angeles area public broadcast stations KCET and KLCS have already announced they are joining forces to share a single over-the-air broadcast television channel, even as their business and programming operations remain separate, in order to free a channel for auction.

This delay comes shortly after the FCC pushed back the digital switch-over date for translators and low-power TV stations (from September 2015), allowing them another year to see how the auction results will affect their licenses - a deadline that may now require yet another delay. That seems just as well, as the spectrum auction gives no consideration to their facilities at all, likely repurposing their channels with no guarantee that any other slots will be left for them to switch over to. This has some lawmakers taking up their cause, trying to ensure the survival of small community TV stations, and of broadcast TV in remote areas.

New G.fast standard offers gigabit DSL over short distances

by
in hardware on (#2TQA)
At the Broadband World Forum in Amsterdam this week, several companies are announcing and demonstrating products that bring DSL -- or digital subscriber line -- into a future with a speed of 1 gigabit per second. That's about 1,000 times the data-transfer speed the technology offered when it arrived in the late 1990s. The DSL upgrade comes through a new technology called G.fast. The technology should arrive in homes starting in 2016.

Much of the world doesn't have cable-TV infrastructure at all, and still less of it has fiber-optic connections. Phone networks, though, are widely used, serving about 422 million DSL subscribers globally in 2013, according to analyst firm IHS. That should rise to 480 million by 2018. But reflecting the competitive threat to DSL equipment makers, fiber-optic links are expected to spread much more rapidly -- from 113 million in 2013 to 200 million in 2018. European customers are likely to favor G.fast in particular, Triductor CEO Tan Yaolong said, because labor costs are very high in that region, which discourages extensive renovation projects.

To meet its full gigabit-per-second potential, G.fast connections will require broadband providers to use network equipment close to the customers' buildings -- 50 meters (about 160 feet) or less. A 200-meter distance will still be good enough for about 600Mbps. That's why broadband providers have been placing their network gear closer to homes -- often in boxes under sidewalks, in cabinets by roads, or boxes attached to telephone poles. That's also why it's so expensive to upgrade broadband networks: the ISPs have had to extend their networks to bring that network gear closer to their customers.

Lunduke says the LXDE Desktop is "Nothing to write home about"

by
in linux on (#2TP9)
Somebody just go ahead and call this article a troll. That's essentially what it is. But heck, maybe it will get some discussion going. Linux pundit Bryan Lunduke over at Network World has spent some time using the LXDE desktop and writes: I've used LXDE for weeks, and I'm still having trouble finding much to say about it. That's not a good sign. What the hell, man?
I feel like, after all this time, I should have something interesting to talk about. But I just plain don't.

It's fast, blisteringly fast. And it's damned lightweight too. After that, things get pretty boring. LXDE is built on GTK+, which means GTK-based apps are right at home. So that's a plus, I suppose. Though that really isn't a problem on any desktop environment I've tried so far. But... you know... it's something that I can write down about it. After that, things get average and mundane... in a hurry.
I'm not sure what the issue is: in my opinion, LXDE is simple, intuitive, and stays the heck out of your way so you can work. How can that possibly be a negative? So, go ahead: insult the author. Then the guy who submitted this article (me) and posted it (me again). Then discuss. I'm verklempt.

Friday Distro: Redo Backup & Recovery

by
in linux on (#2TNW)
Too many Linux distros out there seem to be pet projects, focused on minor choices of theme and desktop environment. Redo Backup & Recovery is much more focused and is worth a look as a useful and important sysadmin tool. For starters, note they don't even bother to call it a distro: the fact that there's Linux underneath is not the point. But take a closer look and it's obvious that it's the power of Linux that makes this thing possible.

RB&R is simple: you download it and burn it to a disk or USB stick, which you then use to back up your machines. Boot a machine from your RB&R disk and let it work its magic. RB&R will mount the machine's partitions and create a backup you can store elsewhere, say on a network share. If that machine ever gets misconfigured, virus-infected, or anything else, you can simply restore one of the backups as though it were a bare-metal restore. It's essentially OS-agnostic, permitting sysadmins to back up and restore Windows or Linux machines with equal ease (it's not clear how good its Mac support is, though!). It's graphical, auto-configures network shares, and because you make the backup by booting the machine from your disk/USB stick, you don't even need login rights on that machine.

The whole thing is a simple 250MB disk image that gets you a graphical interface based on Openbox. Under the hood, it's simply a clever GPLv3 Perl script that leverages GTK2+ and Glade, plus partclone, which does the block-level disk backup or re-imaging. Partclone supports ext2/3/4, HFS+, reiserfs, reiser4, btrfs, vmfs3/5, xfs, jfs, ufs, ntfs, fat(12/16/32), and exfat.
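For the curious, the heavy lifting is roughly a single partclone invocation per partition. Here's a hedged Python sketch of the sort of call RB&R automates behind its GUI; this is my own illustration, not code from the project, the device and image paths are made up, and partclone ships one binary per filesystem (partclone.ext4, partclone.ntfs, and so on).

```python
# Illustrative only: roughly the partclone calls a tool like Redo Backup &
# Recovery wraps. Paths are hypothetical; run only against an unmounted partition.
import subprocess

def backup_partition(device="/dev/sda1", image="sda1.partclone.img"):
    # partclone.ext4 -c clones only the used blocks of an ext4 filesystem;
    # -s names the source device, -o the output image file.
    subprocess.run(["partclone.ext4", "-c", "-s", device, "-o", image],
                   check=True)

def restore_partition(image="sda1.partclone.img", device="/dev/sda1"):
    # -r restores a previously created image back onto the target partition.
    subprocess.run(["partclone.ext4", "-r", "-s", image, "-o", device],
                   check=True)

# Example usage (left commented so nothing destructive runs by accident):
# backup_partition("/dev/sdb1", "/mnt/nas/sdb1.partclone.img")
```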

I like this approach: they don't make much noise about Linux; they just present a useful tool any sysadmin would be grateful to have. It is tightly focused on providing a single service and doesn't get wrapped up in the troubles of inevitable "feature creep". It does one thing, and does it well. I know my openSUSE box has recovery tools built into its YaST management system, but my brief test shows RB&R is easier, more user-friendly, and hassle-free. I will keep using it, as restoring from an image is far easier and undoes the inevitable trouble I get into by downloading and experimenting with software packages that eventually combine to hose my system. Give it a look for yourself, and sleep a bit easier.

Tablets vs Chromebooks: an unexpected year

by
in hardware on (#2TNS)
So much for the Post-PC Revolution! Despite all the hype about tablets and their obvious benefits and use scenarios, the demise of traditional computing form factors seems to have been exaggerated. Never mind that 2014 will probably see over 250 million tablets shipped and sold; tablet sales are actually slowing. Analysts predict that Apple will probably face a year-long iPad sales dip, though it's hard to say what effect the newly-released models will have.

But just as surprising, sales of Chromebooks have actually surged over the last two quarters. Gavin Clarke at the Register points out recent research that projects a doubling of the Chromebook market year on year, with HP, Samsung and Acer taking the lion's share. They still represent a small slice of the overall market, with only 4 million units shipped (of 300m conventional PCs in total), so it's too soon to say the Chromebook revolution is here.

But it does show surprising potential in the traditional laptop form factor, and gives some reason to wonder whether, despite all the hype about tablets, phablets, and smart phones, consumers still find themselves reaching for a portable device with a great keyboard.

Can ICANN agree to oversight of its decisions?

by
in internet on (#2TNQ)
Central to the functioning of the Internet as we know it is the Domain Name System (DNS), and currently at least, central to DNS is the Internet Corporation for Assigned Names and Numbers (ICANN). Now, in the context of DNS's expanding mandate (the new global top-level domains), the Snowden revelations showing how the US government has abused its role in overseeing ICANN, and a few bungle-headed decisions by ICANN itself, that arrangement may be up for revision. The Register writes: The future health of the internet comes down to ONE simple question: can ICANN be forced to agree to oversight of its decisions?
Such is the importance of the core that ICANN has been purposefully lumbered with an organisational design that tests the limits of sanity: three supporting organisations (one of which is broken up into another four components and then sub-divided again); four advisory committees; a 20-person board; and a permanent staff. Just like the internet, however, this global and decentralised organisation has a potential flaw: a central core of staff and board, without which the rest of it would start to erode and break apart.

And that's where the US government comes in. Since the creation of ICANN in 1999, the US government has overseen the organisation. Uncle Sam was supposed to step away within just a few years, but for various complicated reasons it never has. In every one of the 15 intervening years, ICANN's core - its staff and board - have made at least one fundamentally stupid decision, usually against the explicit wishes of the majority of the organisation.

And then refused to change its mind.

Each time it has done so, the United States administration has done the equivalent of walking into the room, smacking ICANN over the head and leaving again.
An interesting and important subject, and a well-written article (slightly longer than usual, at 4 pages).

Nexus 9 Tablet to be powered by Nvidia Tegra K1 64-bit chips

by
in hardware on (#2TMT)
Who's happy about Google's newly-announced Nexus 9 Tablet? Well, lots of folks, but no one is happier than the guys at Nvidia. The Nexus 9 will be the first device to run the new Android Lollipop, and powering it will be the 64-bit version of NVIDIA's Tegra K1.
[The Tegra K1 is] an ARM Holdings v8-based beast with dual 2.3 GHz Denver CPUs, and 192 Kepler GPU cores. That's a huge relief for NVIDIA shareholders, who still remember last year's painful Tegra 4 delays, which enabled Qualcomm's Snapdragon S4 to win a coveted spot in Google's second-gen Nexus 7 tablet.
It's a fast chip, and reportedly smokes both the Nexus 6's Adreno 420 and the Galaxy Note 4's Mali T-760 in GPU tests. Furthermore, using a superscalar micro-architecture, these chipsets support Dynamic Code Optimization and use the Kepler GPU architecture, which powers some of the world's fastest gaming PCs and supercomputers.

The pundits claim this new chip, with its Kepler architecture, will allow Google to bring state-of-the-art graphics to Android for PC- and console-class games. At a minimum, it will allow your Angry Birds to fly a hell of a lot faster.


Future manned Mars exploration at risk due to lowered solar activity

by
in space on (#2TMA)
Does the worsening galactic cosmic radiation environment observed by CRaTER preclude future manned deep-space exploration? That is the conclusion of a recently published paper, which posits that the recent decrease in solar activity has led to an increased incidence of cosmic rays, a particularly dangerous form of radiation. That may just put a damper on anyone interested in organizing manned exploration of the Red Planet.
The Sun and its solar wind are currently exhibiting extremely low densities and magnetic field strengths, representing states that have never been observed during the space age. The highly abnormal solar activity between cycles 23 and 24 has caused the longest solar minimum in over 80 years and continues into the unusually small solar maximum of cycle 24. As a result of the remarkably weak solar activity, we have also observed the highest fluxes of galactic cosmic rays in the space age, and relatively small solar energetic particle events. We use observations from the Cosmic Ray Telescope for the Effects of Radiation (CRaTER) on the Lunar Reconnaissance Orbiter (LRO) to examine the implications of these highly unusual solar conditions for human space exploration. We show that while these conditions are not a show-stopper for long duration missions (e.g., to the Moon, an asteroid, or Mars), galactic cosmic ray radiation remains a significant and worsening factor that limits mission durations.
Very interesting, and at least for me counterintuitive: I would have thought that less solar activity means less radiation. But it seems the solar wind normally has the effect of reducing the amount of dangerous cosmic radiation that can reach the inner solar system. To that point, the article notes: While particles and radiation from the Sun are dangerous to astronauts, cosmic rays are even worse, so the effect of a solar calm is to make space even more radioactive than it already is.