It's the same reason you don't want Chinese equipment in your telecommunications infrastructure. You can't trust what the Chinese government will do to it or with it.
OK, so the "Storing data in the network ... " title made me remember something.
If you transmit a message to Mars, say a rover command sequence, and the outgoing buffer is deleted on the sending side (the original code is preserved, but the transmission-encoded sequence doesn't stick around), then that data, for 20-90 minutes, exists nowhere _except_ space. It's just random-looking electrical fluctuations that are propagating through whatever is out there until it hits a conducting piece of metal millions of miles away and energizes a cap bank enough to be measured by a digital circuit and reconstructed into data.
So, if you calculate the data rate (9600 baud, even), and set up a loopback/echo transmitter on Mars, you could store ~4 MB "in space". If you're using lasers, it's >100x as much.
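The arithmetic checks out, roughly. A quick sketch, treating 9600 baud as 9600 bit/s for simplicity (real deep-space links use lower rates and heavy error-correction coding):

```python
def bits_in_flight(bitrate_bps, one_way_s, loopback=True):
    """Data 'stored' in the propagation path of a radio link.

    With a loopback/echo transmitter on the far end, data occupies the
    full round trip; otherwise only the one-way path.
    """
    delay = 2 * one_way_s if loopback else one_way_s
    return bitrate_bps * delay

# Earth-Mars one-way light time ranges from roughly 3 to 22 minutes.
for minutes in (3, 13, 22):
    bits = bits_in_flight(9600, minutes * 60)
    print(f"{minutes:2d} min one-way: {bits / 8 / 1e6:.1f} MB in flight")
```

At maximum Earth-Mars distance this gives about 3.2 MB circulating in the link, in the same ballpark as the ~4 MB estimate above; a laser link at ~1 Mbit/s or more scales the figure up proportionally.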
I definitely was not aware Spotify DRM had been cracked to enable downloading at scale like this.
The thing is, this doesn't even seem particularly useful for average consumers/listeners, since Spotify itself is so convenient, and trying to locate individual tracks in massive torrent files of presumably tens of thousands of tracks each sounds horrible.
But this does seem like it will be a godsend for researchers working on things like music classification and generation. The only thing is, you can't really publicly admit exactly what dataset you trained/tested on...?
Definitely wondering if this was in response to demand from AI researchers/companies who wanted this stuff, or if the major record labels already license their entire catalogs for training purposes cheaply enough that this really is intended solely as a preservation effort.
I work on campus (very very close to the engineering building) and I previously lived near Brookline. So all of this hits home.
But what got me was that the tipster who blew the case wide open is reportedly a homeless Brown graduate who had been living in the basement of the engineering building (a la the South Korean film Parasite). It made me sad, but not surprised: that building does have a single-occupancy bathroom with showers, and keycard access wasn't required until 7pm in the evening.
So it made sense to me that he or she would've used that building for shelter and comfort. It also didn't boggle my mind at all that a Brown grad (from the picture, the tipster looked like an artistic Brown student rather than the careerist type) would be homeless, given that I've known many classmates with a certain personality, brilliant but also idealistic/uncompromising, that unfortunately made them brittle in a society that rewards conformity, settling, and stability.
I can't get over the fact that two Brown students who had presumably fallen by the wayside of society chose two different paths: (1) the homeless guy who, a la Parasite, persevered in the basement of Barus & Holley for 15 years after his 2010 graduation, yet still had the situational awareness to rise to the occasion and give the biggest tip to the Providence Police; (2) the other guy who harbored so much resentment over the course of 25 years that he planned a trip from Florida to gun down innocent 18- and 19-year-old kids, along with his classmate from when they themselves were 18 and 19.
Let's spend years plugging holes in V8, splitting browser components into separate processes, and improving sandboxing, and then just plug an LLM with debugging enabled into Chrome. Great idea. The last time we had such a great idea, it was lead in gasoline.
I'm a little frustrated with articles like this that scattershot their critique by conflating genuine failures with problems that even FAANGs struggle with.
In particular, I don't love it when an article attacks a best practice as a cheap gotcha:
"and this time it was super easy! After some basic reversing of the Tapo Android app, I found out that TP-Link have their entire firmware repository in an open S3 bucket. No authentication required. So, you can list and download every version of every firmware they’ve ever released for any device they ever produced"
That is a good thing - don't encourage security through obscurity! The impact of an article like this is as likely to get management to prescribe a ham-handed mandate to lock down firmware as it is to get them to properly upgrade their security practices.
To put this into perspective, What.CD [0] was widely considered to be the music library of Alexandria, unparalleled in both its high quality standards and its depth. What.CD had somewhere in the ballpark of a few million torrents when it got raided and shut down. Anna's rip of Spotify includes roughly 186 million unique records. Granted, the tail end is a mixed bag of bot music and whatnot, but the scale is staggering.
Wind gusts were reaching 125 MPH in Boulder county, if anyone’s curious. A lot of power was shut off preemptively to prevent downed power lines from starting wildfires. Energy providers gave warning to locals in advance. Shame that NIST’s backup generator failed, though.
Some people in the US deride its close allies as "freeloaders" because they choose to use and buy US tech, reinforcing the US's position as a global powerhouse. (Meanwhile, US tech is built on the shoulders of those allies.) Now we see these same allies starting to look inward and invest in technology they own completely, because the US is acting decidedly unlike an ally. Something unthinkable since WW2.
I don't see this news as anything but a good thing. For every technology out there, the EU needs a native alternative. It's clear the current US administration wants to make the EU worse based on a politics of grievance.
Self-hosting is more a question of responsibility, I'd say. I run a couple of SaaS products and self-host them with much better performance at a fraction of the cost of running them on AWS. It's amazing and it works perfectly fine.
For client projects, however, I always try and sell them on paying the AWS fees, simply because it shifts the responsibility of the hardware being "up" to someone else. It does not inherently solve the downtime problem, but it allows me to say, "we'll have to wait until they've sorted this out, Ikea and Disney are down, too."
Doesn't always work like that and isn't always a tried-and-true excuse, but generally lets me sleep much better at night.
With limited budgets, however, it's hard to accept the cost of RDS (and we're talking with at least one staging environment) when comparing it to a very tight 3-node Galera cluster running on Hetzner at barely a couple of bucks a month.
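For reference, the kind of minimal Galera setup described above really is just a handful of `wsrep` settings per node; a sketch, with placeholder addresses and cluster name (a real deployment also needs SST credentials, firewalling, and an odd node count for quorum):

```ini
# /etc/mysql/conf.d/galera.cnf -- one node of a 3-node MariaDB Galera cluster
[mysqld]
binlog_format=ROW
wsrep_on=ON
wsrep_provider=/usr/lib/galera/libgalera_smm.so
wsrep_cluster_name=app-cluster
# All cluster members; new nodes join by contacting any of them.
wsrep_cluster_address=gcomm://10.0.0.1,10.0.0.2,10.0.0.3
# This node's own address (differs per node).
wsrep_node_address=10.0.0.1
wsrep_sst_method=mariabackup
```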
Or Cloudflare, the titan at the front, being down again today and intermittently over the past two days, after also going down a few weeks ago and earlier this year. I've also had SQS queues time out several times this week; they picked up again shortly, but it's not like those things never happen on managed environments. They happen quite a bit.
If Flock truly believed that the domain name infringes on their trademark, they would file an ICANN UDRP complaint instead of Cloudflare and Hetzner abuse reports.
But they don't, because the former would require them to perjure themselves, and the latter just requires them to lie to a hosting company.
This is why it's almost always wrong for library functions to log anything, even on "errors". Pass the status up through return values or exceptions. As a library author, you have no clue as to how an application might use your code. Multithreading, retry loops, and expected failures will turn what's a significant event in one context into something not even worthy of a debug log in another. No rule without exceptions, of course; one valid case could be truly slow operations where progress reports are expected. Modern tracing telemetry with sampling can be another solution for the paranoid.
> When implementing logging, it's important to distinguish between an error from the perspective of an individual operation and an error from the perspective of the overall program or system. Individual operations may well experience errors that are not error level log events for the overall program. You could say that an operation error is anything that prevents an operation from completing successfully, while a program level error is something that prevents the program as a whole from working right.
This is a nontrivial problem when using properly modularized code and libraries that perform logging. They can't tell whether their operational error is also a program-level error, which can depend on usage context, but they still want to log the operational error themselves in order to provide details that aren't accessible to higher-level code. This lower-level logging has to choose some severity level.
Should only “top-level” code ever log an error? That can make it difficult to identify the low-level root causes of a top-level failure. It also can hamper modularization, because it means you can’t repackage one program’s high-level code as a library for use by other programs, without somehow factoring out the logging code again.
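One way to square this circle is for the library to attach the low-level detail to the error it raises, and let the application choose the severity. A minimal sketch (the `FetchError`, `fetch_record`, and retry wrapper are hypothetical names for illustration):

```python
import logging

log = logging.getLogger("app")

class FetchError(Exception):
    """Raised by the library; carries low-level detail instead of logging it."""

def fetch_record(store, key):
    # Library code: report the operational error upward, don't log it.
    if key not in store:
        raise FetchError(f"key {key!r} not found in store of {len(store)} records")
    return store[key]

def fetch_with_retry(store, key, fallback_keys):
    # Application code decides severity: a miss on one candidate is expected
    # here, so it's debug-level; only total failure is a program-level error.
    for k in (key, *fallback_keys):
        try:
            return fetch_record(store, k)
        except FetchError as e:
            log.debug("miss for %r: %s", k, e)
    log.error("all candidate keys failed for %r", key)
    raise FetchError(f"no candidate key succeeded for {key!r}")
```

The root-cause detail still reaches the log when the application wants it, but the library never dictates what counts as an error for the program as a whole.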
The apparent information gathering and brutal review process here is unbelievable. If I'm understanding this correctly, the requirement is that, e.g., the Epic Games Store must register and upload every single APK for every app it offers, and cannot offer an app in its store until Google approves it, which may take a week or more, including every time the app updates.
Meanwhile, Google gets full competitive insight into which apps are being added to Epic's store, apparently their download rates too, and even the APKs to boot. That potentially makes it easier for Google to onboard those app devs itself if it likes, and it can pressure them to do so by dragging its feet on the review process.
> Provide direct, publicly accessible customer support to end users through readily accessible communication channels.
This is an interesting requirement. I want to see someone provide the same level of support that Google does to see if it draws a ban.
LLMs have certainly become extremely useful for software engineers; they're very convincing (and pleasers, too), and I'm still unsure about the future of our day-to-day job.
But the thing that has scared me the most is how much the general public trusts LLM output. I believe that for software engineers it's really easy to see whether it's being useful or not: we can just run the code and see if the output is what we expected, and if not, iterate and continue. There's still a professional looking at what it produces.
For more day-to-day usage by the general public, on the other hand, it's getting really scary. I've had multiple members of my family use AI to ask for medical advice, life advice, and other things where I still see hallucinations daily, but at the same time the models are so convincing that it's hard for them not to trust them.
I've also seen fake quotes, fake investigations, and fake news spread by LLMs that have affected decisions (maybe not crucial ones yet, but time will tell), and that's a danger most software engineers just gloss over.
Accountability is a big asterisk that everyone seems to ignore.
The disaster plan is to have a few dozen stratum 1 servers spread around the world, each connected to a distinct primary atomic clock, so that it would take a catastrophe big enough to take down the global internet itself for all of the servers to become unreachable.
The failure of a single such server is far from a disaster.
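That redundancy only helps if clients actually fall back across servers, which is the standard pattern: try each configured source and fail only when all are unreachable. A minimal sketch (the query callables stand in for real NTP requests):

```python
def first_reachable(sources):
    """Return (name, result) from the first time source that responds.

    Each source is a (name, query) pair where query is a zero-argument
    callable that returns a timestamp or raises OSError when unreachable.
    """
    errors = []
    for name, query in sources:
        try:
            return name, query()
        except OSError as e:
            errors.append((name, e))
    raise OSError(f"all {len(errors)} time sources unreachable: {errors}")
```

With a few dozen independent stratum 1 servers in the pool, the probability of this raising in practice is essentially the probability of a global network outage.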
An issue with having the legal limit at ~2-5 ng/ml is that it puts habitual users over the limit whether they have smoked recently or not.[0] That makes the prohibition seem unserious to some (not about safety, but about punitive control), and in turn makes it matter less whether you smoke and drive, since you're risking trouble either way.
The impairments of driving under the influence of alcohol have been studied extensively, but unless I've overlooked the literature, the same investigations don't seem to have been carried out for THC.
Yeah, hard disagree on that one: based on recent surveys, 80-90% of developers globally use IDEs rather than CLIs for their day-to-day work.
I was pretty worried about Cursor's business until they launched their Composer 1 model, which is fine-tuned to work amazingly well in their IDE. It's significantly faster than using any other model, and it's clearly fine-tuned for the type of work people use Cursor for. They are also clearly charging a premium for it and making a healthy margin on it, but for how fast and good it is, it's totally worth it.
Composer 1, plus eventually building an AI-native version of GitHub with Graphite: that's a serious business, and it gives me a much clearer picture of how Cursor gets to serious profitability than the AI labs have.
I'm the Manager of the Computing group at JILA at CU, where utcnist*.colorado.edu used to be housed. Those machines were, for years, consistently the highest bandwidth usage computers on campus.
Unfortunately, the HP cesium clock that backed the utcnist systems failed a few weeks ago, so they're offline. I believe the plan is to decommission those servers anyway - NIST doesn't even list them on the NTP status page anymore, and Judah Levine has retired (though he still comes in frequently). Judah told me in the past that the typical plan in this situation is that you reference a spare HP clock with the clock at NIST, then drive it over to JILA backed by some sort of battery and put it in the rack, then send in the broken one for refurb (~$20k-$40k; new box is closer to $75k). The same is true for the WWVB station, should its clocks fail.
There is fiber that connects NIST to CU (it's part of the BRAN - Boulder Research and Administration Network). Typically that's used when comparing some of the new clocks at JILA (like Jun Ye's strontium clock) to NIST's reference. Fun fact: Some years back the group was noticing loss due to the fiber couplers in various closets between JILA & NIST... so they went to the closets and directly spliced the fibers to each other. It's now one single strand of fiber between JILA & NIST Boulder.
That fiber wasn't connected to the clock that backed utcnist though. utcnist's clock was a commercial cesium clock box from HP that was also fed by GPS. This setup was not particularly sensitive to people being in the room or anything.
Another fun fact: utcnist3 was an FPGA developed in-house to respond to NTP traffic. Super cool project, though I didn't have anything to do with it, haha.
My country spends less on defence as a percentage of GDP than the US, but it spends much of that with US companies. That is not freeloading; it was a deal. Cancel TSR-2 and buy American, and we will lend you some money. Cancel your nuclear program and buy US submarine-launched missiles, and we will help you look after yourself. Now let Visa and Mastercard skim off all your transactions, and we will keep you secure to keep the money flowing. Sweetheart tax deals for US companies to operate, and we will keep you safe to keep the money flowing. It is not freeloading; it is colonialism.
> Server Logs
> Like all web services, our servers may log:
> IP addresses of visitors
> Request timestamps
> User agent strings
> These logs are used for security and debugging purposes and are not linked to your account.
The fact that this is being introduced after the whole Epic/Apple thing clearly shows that the penalties in that case were not nearly severe enough and the standards set were not nearly stringent enough. The mere attempt to engage in policies like this should result in fines in the hundreds of billions.
> employing engineers to manage self-hosted databases is more cost effective than outsourcing
Every company out there is using the cloud and yet still employs infrastructure engineers to deal with its complexity. The "cloud" reducing staff costs always was, and still is, a lie.
PaaS platforms (Heroku, Render, Railway) can legitimately be operated by your average dev without hiring a dedicated person; those cost even more, though.
Another limitation of both the cloud and PaaS is that they are only responsible for the infrastructure/services you use; they will not touch your application at all. Can your application automatically recover from a slow/intermittent network, a DB failover (that you can't even test because your cloud providers' failover and failure modes are a black box), and so on? Otherwise you're waking up at 3am no matter what.
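The application-side half of that resilience is usually some form of retry with backoff around every connection-dependent operation. A minimal sketch (the function and parameter names are illustrative, not from any particular framework):

```python
import random
import time

def call_with_retry(op, attempts=5, base_delay=0.2,
                    retryable=(ConnectionError, TimeoutError)):
    """Retry a flaky operation, e.g. a DB call during a failover.

    Uses exponential backoff with jitter so that many clients don't
    all hammer the recovering server at the same instant.
    """
    for attempt in range(attempts):
        try:
            return op()
        except retryable:
            if attempt == attempts - 1:
                raise  # retry budget exhausted: surface to the caller/pager
            time.sleep(base_delay * (2 ** attempt) * random.uniform(0.5, 1.5))
```

The hard part isn't the loop; it's deciding which operations are safe to retry (idempotent reads, yes; non-idempotent writes need dedup keys or transactions), and that decision lives in your application, not in the cloud provider's SLA.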
The US means to undermine the EU: https://www.dw.com/en/will-trump-pull-italy-austria-poland-h...
The US means to annex European territory: https://www.bbc.com/news/articles/c0j9l08902eo