This can be hard to explain to both engineers and laypeople because there's Arduino, then there's Arduino, and then there's Arduino...
For instance, "Arduino" could mean the Arduino-branded boards and the cloud-based development stack.
On the other hand, "Arduino" has become almost a generic term for any microcontroller board that happens to support the (open source) Arduino API. As in: "Just throw an Arduino in there."
For instance, I've got several ongoing projects using third-party boards such as Teensy, where my entire relationship to Arduino is represented by a single line of code:
We get that, but when some elements of the stack have incredibly onerous terms added to them, the second- and third-order Arduinos become untenable from a licensing standpoint, particularly for low-level developers working with minimal docs.
I'm including a few different articles on this because viewpoint diversity is good for this issue.
My expectation (well, maybe my hope) is that the Arduino community will diverge from the Arduino business and become self-sustaining.
Arguably, the cloud-based development environment may have been a good idea, especially since a lot of students are stuck with locked-down Chromebooks that can't install the toolchain locally. And I lack the expertise to speculate about this, but it would seem to me that if an entire Python toolchain can run in a web app (e.g., JupyterLite), then maybe an embedded dev environment could be made to work in a similar way.
Are racism, misogyny, and superstition useful or useless? Now one could argue that those things are not right or left wing. There is no epistemology for what any given ideology -- political or religious -- consists of. An apologist can simply deny that "negative" things are part of their ideology.
> Are racism, misogyny, and superstition useful or useless?
Someone can just call you racist, misogynist, or whatever. Those labels don’t mean anything anymore because they’re used as weapons to shut down anyone who is ideologically opposed to you. Are some people racist or misogynistic? Yes. But when you use these labels carelessly on anyone who says something you don’t like, you pave the way for actual racists and misogynists, because they can call you unreasonable, inaccurate, or unreliable.
And what is an apologist, exactly? If you’re unable to come up with a different way to look at something, that doesn’t make someone else who looks at the same thing in a different way an apologist. These are just political smears of the incurious or browbeaters.
Just because the leftists happen to align themselves with pro-social values that I prefer, like anti-racism or anti-misogyny, doesn’t mean they’re right about everything.
> Are racism, misogyny, and superstition useful or useless?
Obviously, they're useless and counter-productive.
There's no rational reason to think that there is a significant difference between people with different skin/eye/hair colour, or between genders; it's obviously a simplistic tribal belief. What we end up getting is a huge amount of wasted talent because of tribal politics - imagine how many Einstein-like intellects have died in poverty with no access to education.
It seems to me that people use racism and misogyny to help them stockpile wealth, which is a big detriment to society - rather than working together, we have people working against other people.
Even the idea of "races" is itself flawed and has no meaningful basis apart from "they look different".
> Even the idea of "races" is itself flawed and has no meaningful basis apart from "they look different".
Agreed. There are genetic differences between families (think: extended blood relatives, but even those differences change after enough generations), but they’re hardly along racial lines.
Where are the junior devs while their code is being reviewed? I'm not a software developer, but I'd be loath to review someone's work unless they have enough skin in the game to be present for the review.
Code review is rarely done live. It's usually asynchronous, giving the reviewer plenty of time to read, digest, and give considered feedback on the changes.
Perhaps a spicy patch would involve some kind of meeting. Or maybe in a mentor/mentee situation where you'd want high-bandwidth communication.
Yeah, when we first started, "code review" was a weekly meeting of pretty much the entire dev team (maybe 10 people). Not all commits were reviewed; it was random, and the developer would be notified a couple of days in advance that his code was chosen for review so that he could prepare to demo and defend it.
Wow, that's a very arbitrary practice: do you remember roughly when that was?
I was in a team in 2006 where we did the regular two-approve-code-reviews-per-change-proposal (along with fully integrated CI/CD, some of it through signed email, though not full diffs like Linux patchsets, only "commands" saying which branch to merge where).
Around that time frame. We had CI and if you broke the build or tests failed it was your job to drop anything else you were doing and fix it. Nothing reached the review stage unless it could build and pass unit tests.
This was still practice at $BIG_FINANCE in the couple of years just before covid, although by that point such team reviews were reducing in importance and prominence.
I'm old enough that this was the status quo for part of my career, and I have also been in some groups that did this as a rejection of modern code review techniques.
There are pros & cons to both sides. As you point out it's quite expensive in terms of time to do the in person style. Getting several people together is a big hassle. I've found that the code reviews themselves, and what people get out of them, are wildly different though. In person code reviews have been much more holistic in my experience, sometimes bordering on bigger picture planning. And much better as a learning tool for other people involved. Whereas the diff style online code review tends to be more focused on the immediate concerns.
There's not a right or wrong answer between those tradeoffs, but people need to realize they're not the same thing.
I would guess that a three-part code review would actually be most effective, and it would likely even save on costs. The first part is a walkthrough on a call, the next an independent review with comments, and then, as needed, another call over fixes or discussion.
You'd probably spend more time on it, but it would build shared understanding and alignment.
And yet... is it? Realtime means real discussion, an opportunity to align ever so slightly on a common standard (which we should write down!), and an opportunity to share tacit knowledge.
It also increases the coverage area of code that each developer is at least somewhat familiar with.
On a side note, I would love it if the default was for these code reviews to be recorded. That way, two years later when I am asked to modify some module that no one has touched in that span, I could at least watch the code review and glean something about how/why it was architected the way it was.
A senior dev should be mentoring and talking to a junior dev about a task well before it hits the review stage. You should discuss each task with them on a high level before assigning it, so they understand the task and its requirements first, then the review is more of a formality because you were involved at each step.
Also communal RFCs, RFPs, Roadmapping, Architecture/Design Proposals, Design Docs and/or Reviews help socialize/diffuse org standards and expectations.
I found these help ground the mentorship and discussions between junior-senior devs. And so even for the enterprising aka proactive junior devs who might start working on something in advance of plans/roadmaps, by the time they present that work for review, if the work followed org architectural and design patterns, the review and acceptance process flows smoothly.
In my junior days I was taught: if the org doesn't have a design or architectural SOP for the thing you're doing, find several respectable RFCs from the internet, pick the three you like, and implement one. It's so much easier to stand on the shoulders of giants than to try and be the giant yourself.
And even then, in my experience, they work more like support tickets than business email, for which there are loose norms for response time, etc. Unless there’s a specific reason it needs to be urgently handled, people will prioritize other tasks.
As someone else mentioned, the process is async. But I achieve a similar effect by requiring my team to review their own PRs before they expect a senior developer to review them and approve for merging.
That solves some of the problem with people thinking it's okay to fire off a huge AI slop PR and make it the reviewer's responsibility to see how much the LLM hallucinated. No, you have to look at yourself first, because it's YOUR code no matter what tool you used to help write it.
Reviewing your own PR is underrated. I do this with most of my meaningful PRs, where I usually give a summary of what/why I'm doing things in the description field, and then reread my code and call out anything I'm unsure of, or explain why something is weird, or alternatives I considered, or anything that I would catch reviewing someone else's PR.
It makes it doubly annoying, though, whenever I go digging in `git blame` and find a commit with a terrible title, no description, and an "LGTM" approval.
> requiring my team to review their own PRs before they expect a senior developer to review them
I'm having a hard time imagining the alternative. Do junior developers not take any pride in their work? I want to be sure my code works before I submit it for review. It's embarrassing to me if it fails basic requirements. And as a reviewer, what I want to see more than anything is how the developer assessed that their code works. I don't want to dig into the code unless I need to -- show me the validation and results, and convince me why I should approve it.
I've seen plenty of examples of developers who don't know how to effectively validate their work, or document the validation. But that's different than no validation effort at all.
> Do junior developers not take any pride in their work?
Yes. I have lost count of the number of PRs that have come to me where the developer added random blank lines and deleted others from code that was not even in the file they were supposed to be working in.
I'm with you -- I review my own PRs just to make sure I didn't inadvertently include something that would make me look sloppy. I smoke test it, I write comments explaining the rationale, etc. But one of my core personality traits (mostly causing me pain, but useful in this instance) is how much I loathe being wrong, especially for silly reasons. Some people are very comfortable with just throwing stuff at the wall to see if it'll stick.
That is my charitable interpretation, but it's always one or two changes across a module that has hundreds, maybe thousands of lines of code. I'd expect an auto-formatter to be more obvious.
In any case, just looking over your own PR briefly before submitting it catches these quickly. The lack of attention to detail is the part I find more frustrating than the actual unnecessary format changes.
Why would you care about blank lines? Sounds to me like aborted attempts at a change: you realize you don't need them, see them in your PR, and figure they don't actually do anything.
> Yes. I have lost count of the number of PRs that have come to me where the developer added random blank lines and deleted others from code that was not even in the file they were supposed to be working in.
That’s not a great example of lack of care; if you use code formatters, then this can happen very easily and be overlooked in a big change. It’s also really low stakes; I’m frankly concerned that you care so much about this that you’d label a dev careless over it. I’d label someone careless who didn’t test every branch of their code and left a nil pointer error or something, but missing formatter changes seems like a very human mistake for someone who was still careful about the actual code they wrote.
I think the point is that a necessary part of being careful is reviewing the diff yourself end-to-end right before sending it out for review. That catches mistakes like these.
> I want to be sure my code works before I submit it for review.
No kidding. I mean, "it works" is table stakes, to the point I can't even imagine going to review without having tested things locally at least to be confident in my changes. The self-review for me is to force me to digest my whole patch and make sure I haven't left a bunch of TODO comments or sloppy POC code in the branch. I'd be embarrassed to get caught leaving commented code in my branch - I'd be mortified if somehow I submitted a PR that just straight up didn't work.
It’s cultural. It always seemed natural to me, until I joined a team that treated review as some compliance checkbox that had nothing to do with the real work.
Treating real review as an important part of the work requires a culture that values it.
For me the friction of dealing with licenses would make it hard to fully integrate a commercial package into my routine. Commercial developers have to decide how they expect a product to be used, so they can allocate finite resources. This invariably imposes limits on users.
In my case, trivial uses are as important as high-visibility projects. I can spin up a complete Python installation to do something like log data from some sensors in the lab while I do something in another lab, and have something going at my desk, and at home. I use hobby projects to learn new skills. I've played with CircuitPython to create little gadgets that my less technically inclined colleagues can work with. I encouraged my kids to learn Python. I write little apps and give them to colleagues. I probably have a dozen Python installations running here and there at any moment.
This isn't a slam on Matlab, since I know it has a loyal following. And I'm unaware of an alternative to Simulink, if that's your bag. And Matlab might be doing the right thing for their business. My impression is that most "engineering software" is geared towards the engineer sitting at a stationary workstation all day, like a CAD operator. And this may be the main way that software is used. Maybe I'm the freak.
When I was a physics grad student ~35 years ago, this was called "the birth control problem." I had every intention of going into industry. I described it to my dad, who got his PhD in the 1950s, and he said it was the same back then. But there's a perennial "this time it will be different."
It wasn't the same in the 1950s. It became really clear to me how dire the long-term job situation was when I was getting my PhD in the 1990s: I started combing through issues of Physics Today and noticed that the field, and academia as a whole, was explosively expanding from 1920 to 1968 or so, and then there was a sudden crisis in the late 1960s, with an echo in the late 1970s and again when I was in it, in the late 1990s. (Physics Today said I had 2% odds of getting a permanent job even coming from a top school.)
I had one day when a Java applet I'd posted to the web got 100,000 impressions, and getting so much attention for that and so little attention for papers that took me a year to write made me resolve to tell my thesis advisor that I was going to quit. Before I could tell him, he told me he had just a year of funding for me, and I thought... I could tough it out for a year. People were shocked when I did a postdoc when most of my cohort were going straight to finance.
My mental health went downhill in Germany and I stomped away. In retrospect, I was the only native English speaker at the institute, and I could have found a place for myself for some time had I taken on the task of proofreading papers. I can easily imagine I could have made it in academia, but heck, life on a horse farm doing many sorts of software development has been a blast.
One big disruption in the job market was that mandatory age-based retirement was outlawed. This created a span of several years when there were virtually no retirements.
I should have mentioned that my dad's degree was in chemistry, and it might have been a different vibe. But the production of PhDs at a rate faster than they could be absorbed by academic hiring was a thing. My dad (and mom, she got her master's in chemistry) went into industry too, so maybe I was lucky to have good role models.
I'm a long time user of the Arduino IDE for third party boards such as the Teensy. Recently I've switched to Platformio for coding. So I should be satisfied with never needing Arduino's cloud service.
But Adafruit points out a problem, which is that the cloud service is the only available option for students using school-issued Chromebooks. I can confirm that a school-issued Chromebook is likely to be set up to lock out access to any programming tools. We wouldn't want children to learn coding after all, right?
I think relying on a corporation to preserve our freedom to code is a bit too optimistic.
Chromebooks and iPads are both completely unsuitable for digital education in my opinion. They can be decent tools for education using digital resources, but that is something different.
To "force" someone to develop on a Chromebook is like giving someone a bicycle and expecting them to become a race car driver.
That said, I usually flashed my arduinos and used bare metal C. Ironically I think it makes many things easier to learn and understand, provided you have a programming device.
What does a "digital education" look like, specifically?
Having spent several years teaching kids to code everything from games to lightbulbs on Chromebooks, I can confirm that there are certainly difficulties - but they're tradeoffs. I could spend my time coming up with a way to work through the platform restrictions, or I could spend my time maintaining a motley crew of devices and configurations. Having done it both ways, they both have different pain points.
You really can't compare a Chromebook with an iPad. On a Chromebook that I bought and that I fully own I can enable the Linux system and install whatever I want on it (it runs in a VM and it is a full Linux system). The iPad is artificially crippled for programming by Apple.
School IT departments are unlikely to allow this. Even if they don't have technical restrictions, they'll have policies that prohibit it (at least my kids' school district would).
School-issued devices are generally intended to be similar to devices a corporation would provision for non-technical workers.
Honest question: if you buy (just a hypothetical, I assume most parents can't afford to buy one) a Chromebook for your kid that will be used in school, do you have to lock it down, or can you enable the Linux system (assuming that you want to do that and that your kid is interested in learning to program)?
I think an old PC would be more useful than a Chromebook to a kid interested in learning to program; it also avoids dealing with a school district IT department, which has to defend itself from all kinds of attacks from annoying kids and parents, and so is probably more technologically conservative than the average IT worker.
So my advice would be: don't bother trying to provision a Chromebook to connect to some school network. Use a school-issued Chromebook for school stuff (if that's what they issue...), and use a normal PC for extracurricular learning.
For the record: my kids are in elementary school, and are issued Lenovo laptops running Windows. They are locked down to the point where they might as well be Chromebooks; kids have unprivileged accounts and are allowed to run very few programs. This is as it should be; those computers are for a very specific purpose, and are not general-purpose toys.
> a School District IT Department, which have to defend themselves from all kinds of attack from annoying kids
Indeed, when I was in school, the WiFi networks were very poorly secured, so it was easy for annoying kids to get their own computers onto school networks if other students were using school-issued laptops around them. Annoying!
I never said they did a great job keeping the network secure, I only meant to imply that they tend to default to "no" when asked for any kind of technical permission.
Schools typically don't allow BYOD policies because of support costs and equitability between students. Assuming a school district even did allow this, they would only allow the student to use a managed Chrome profile and the school's device policy would lock out the Linux VM option and everything else that might become an in-class distraction.
If a kid wants to learn how to program, they're going to have to bring their own separate computer and it will be treated about the same as bringing their smartphone to class, i.e. not allowed except during very specific times, there would be concerns about liability of damage or theft from other students, and they probably wouldn't allow it on the school networks.
Can confirm a no-BYOD policy is typical. I had to whine directly, and without invitation, to the school principal to get an exemption for my daughter. The trouble with no-BYOD is that the kid must bring the school-controlled Chromebook home and connect it to the home network for homework (which often requires the Chromebook). Many US middle and high schools have an IT department of 1 or 2 people; it introduces an abuse risk I think schools in general are not appreciating.
I see the problem with Chromebooks and cloud stuff more generally as being that it's difficult to see the productive use case of programming beyond just shuffling a bunch of data around. If your program's not actually doing something useful, it seems like it'd be difficult to imagine a career in it. But if a kid can get a relay to trigger via a button, and then maybe via a web interface, and then maybe automate it, I think that opens the world of hacking up to them. You know, for $10 they can have a fully solar (with battery) thermometer or whatever they want; the thermometer can feed a thermostat to energize a relay coil to start a heater or whatever.
But I might be an outlier, because in school we had a robotics class a lot of kids were pumped for, and I was confused because we never did anything useful with the robots; it was more like an art class, except at least in art class we baked ashtrays for our parents. But what am I supposed to do with a 5-watt robot that follows yellow tape?
> (just a hypothetical, I assume most parents can't afford to buy one)
It used to be that high school students were required to have a graphing calculator. These had to be purchased by the student (in other words, by their parents) and, without factoring in 20+ years of inflation, cost more than some Chromebooks available today. I suspect there were (and still are) financial assistance programs, as I've known students living below the poverty line who were able to meet that requirement.
Most larger school systems (if they allowed it at all) would end up "locking" the device as if it were one of theirs for the duration, just like some companies allow you to bring your own laptop or phone, but it becomes "as if it were theirs" while it is managed.
Support costs, mainly.
A small school that does its own IT is more likely to be flexible.
Your personally owned Chromebook isn’t comparable to a school issued Chromebook at all. They’re more locked down and useless than a stock iPad. Kids cannot install Linux on them.
You commented in the context of digital education. The point is that your argument that Chromebooks compare better to iPads doesn't apply in this situation. In fact, they're often worse, because schools deploy the cheapest, lowest-common-denominator Chromebooks with slow CPUs, horrible screen resolutions, inadequate RAM, and terrible battery life. Kids hate them. The fact that good and uncrippled Chromebooks exist doesn't help them at all. A 5-year-old iPad is likely a better experience, and a more capable OS and device, than a new Chromebook issued to students this fall, but the warranty and repair costs for schools dealing with careless kids don't add up to less, so they get the cheaper option.
> On a Chromebook that I bought and that I fully own I can enable the Linux system and install whatever I want on it (it runs in a VM and it is a full Linux system).
Do you really own it? Can you install Linux or BSD _instead_ of ChromeOS?
Yes[1]; depending on the Chromebook / Chrome tablet, it will have varying levels of support for even swapping the firmware and running standard Linux/BSD. Sometimes you will need to open up the laptop to adjust a jumper/screw to enable firmware flashing. For others, it's just a matter of turning on dev mode first.
Apple has long provided tools for teaching kids how to code. Including lessons targeted at kids in middle schools.
> young coders are asked to assist these characters achieving simple goals by coding simple instructions. As challenges become more difficult, more complex algorithms are required to solve them and new concepts are introduced.
Even then, an iPad is not good. An iPad is good for digital art and that's it. For the same money you can buy a computer capable of 3D modeling and digital art, plus a drawing tablet, and buy some paint brushes and clay to do real-life art.
Your work Chromebook is completely incomparable to a school issued Chromebook. It's doubtful that your employer locks you out of literally everything that would allow you to develop software on-device. See my other comments in this thread.
People of HN-age are assuming that school Chromebooks are anything like the Apple-IIe or other computers they had "in the computer lab". Those machines had a "purpose" - but they were wide open for investigation by those who wanted to.
They're not. They're locked down as hard as they can be.
> It's doubtful that your employer locks you out of literally everything that would allow you to develop software on-device.
In strongly regulated industries, it is not unusual that you are indeed strongly locked out of this, and you need to create special requests to get access to the specific functionalities (as an exception) that you need for developing software on-device.
Right, many people have to treat their local computer as a thin-client and do everything through a WebEx session or similar means, which makes the local device irrelevant. Or if you're regulated but have to be specifically exempted and allowed to work in a way that schools would never permit, then in that case you'd not be arguing in good faith that kids are able to learn to code and develop on a Chromebook since they can't.
> Or if you're regulated but have to be specifically exempted and allowed to work in a way that schools would never permit, then in that case you'd not be arguing in good faith that kids are able to learn to code and develop on a Chromebook since they can't.
No, I just wanted to show that your claim
> It's doubtful that your employer locks you out of literally everything that would allow you to develop software on-device. See my other comments in this thread.
simply does not hold in practice.
--
Addendum: additionally, from my school experience, it was rather the attempts to circumvent the "arbitrary" restrictions the school set up on the computers that made you a good coder. :-)
I sense that your claims and suggestions here strongly suggest that your school experience is not a recent one where you were issued a locked down Chromebook.
I would encourage you to expand your lived experience here. Circumventing "arbitrary" restrictions today will burn a hardware fuse, brick it for actual school allowed purposes and cost your parents $170 to resolve. The age of innocently hacking on school property is long gone.
> I sense that your claims and suggestions here strongly suggest that your school experience is not a recent one
Of course.
But nevertheless, I have a feeling that the central difference is not "recent or not", but that older generations were simply much more rebellious in not wanting to accept the restrictions set on school computers, and more willing to do everything imaginable to circumvent them.
I recently had a great time developing on ESP-32 directly in VSCode/Cursor and using the Arduino CLI. I believe very similar in concept to Platformio. I've always hated being limited to the Arduino IDE.
> We wouldn't want children to learn coding after all, right?
Why aren't we teaching kids vibe coding? I've been told that is the future after all, and junior devs will never be needed ever again. All they need is a webpage interface to an LLM to provide data and customer demographics for AI companies.
Because typically we don’t leap to teach kids things that are speculative.
We in the industry might see AI progressing to where vibe coding is just as real as using spreadsheets rather than paper ledger books, but it is years out, and teaching kids on v0.1 tools would just be frustrating for teachers while likely teaching kids all the wrong things.
My kids are absolutely learning how to use AI: every syllabus has guidelines for when AI usage is acceptable (it's not a blanket prohibition against), and they talk about both the pragmatic and ethical implications of it.
A school lesson where the teacher babbles about wishy-washy AI topics needs a lot less preparation by the teacher than a lesson where they teach scientifically sophisticated topics.
Chrome can read and write to a serial port using the Web Serial API. There is also the WebUSB API, but I haven't tried that. I wonder if that would be enough to flash boards on a locked-down Chromebook?
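For what it's worth, the read/write part really is that simple. Here's a minimal sketch of the Web Serial flow (the function name, the 115200 baud rate, and the bounded read loop are my assumptions, not anything the API mandates); it only runs in a Chromium-based browser and must be triggered by a user gesture such as a button click:

```javascript
// Hypothetical sketch: read a few chunks from a board over Web Serial.
async function connectAndRead() {
  // Prompts the user to pick a serial device (e.g., an Arduino board).
  const port = await navigator.serial.requestPort();
  await port.open({ baudRate: 115200 });

  const decoder = new TextDecoder();
  const reader = port.readable.getReader();
  try {
    for (let i = 0; i < 10; i++) {
      const { value, done } = await reader.read();
      if (done) break;
      console.log(decoder.decode(value)); // raw bytes from the board
    }
  } finally {
    reader.releaseLock();
    await port.close();
  }
}
```

Actually flashing a board would additionally require speaking the board's bootloader protocol over this port (or over WebUSB), which is the harder part that browser-based flashing tools have to implement on top of these APIs.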
Technically any recent Chromebook can run Linux in a VM if enabled from settings. Now, I don't know if most schools forbid this, but since it is running in a VM it is safe to use for sure.
The reason people use Chromebooks is because they want to minimally manage the devices. The Chromebooks being locked down is ENTIRELY the point of using them in the first place...That and because Google.
1. Vibe coding a microcontroller firmware project. I'm using "vibe coding" in jest here because I'm actually an experienced coder, but this was a chance to try using the AI coding assistants for a clean sheet project at minimal risk. I'm going on 63, and could easily finish my career without AI, but where's the fun in that?
One amusing thing I've noticed is that every time the AI generates code with a hard coded hexadecimal constant, it's a hallucination. My son suggested feeding all of the chip datasheets into the AI and see if the constants improve.
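Short of feeding in the datasheets, that suggestion could be approximated with a lint step: build an allowlist of register addresses and magic values from the datasheet, then check every hex literal in the generated code against it. A rough sketch, where the register values and the function name are invented for illustration:

```javascript
// Hypothetical lint: flag hard-coded hex constants in generated firmware
// code that don't appear in an allowlist derived from the chip datasheet.
const DATASHEET_CONSTANTS = new Set([
  0x40021000, // e.g., a peripheral base address (made-up value)
  0x1f,       // e.g., a bit-field mask (made-up value)
]);

function suspiciousHexConstants(source) {
  const hexLiterals = source.match(/0x[0-9a-fA-F]+/g) || [];
  // Keep only the constants the datasheet allowlist doesn't account for.
  return hexLiterals.filter((h) => !DATASHEET_CONSTANTS.has(parseInt(h, 16)));
}
```

With the allowlist above, `suspiciousHexConstants("reg = 0x40021000 | 0xDEAD;")` returns `["0xDEAD"]`: the one constant worth double-checking against the datasheet.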
2. Finally converting my home semi-hobby electronics business (something like a guitar effects pedal) to machine assembled circuit boards.