The paper (linked from the article) has a photo of the actual tablet in question, which was apparently discovered circa 1900.
SQL, where injection is still in the top 10 security risks
This is absolutely true, but it’s not what it looks like on the surface, and if you dig into the OWASP entry for this, you’ll see they talk about mitigation.
You can completely eliminate the possibility of injection attacks using well-understood technologies such as bind variables, which an ORM will usually use under the covers but which you can also use with your own queries. There are many, many database applications that have never once had a SQL injection vulnerability and never will.
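To make that concrete, here’s a minimal sketch of a bind variable in plain JDBC; the users table and its columns are invented purely for illustration:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class UserLookup {
        // Hypothetical "users" table with "id" and "email" columns, for illustration only.
        public static String findEmailById(Connection conn, long userId) throws SQLException {
            // The "?" is a bind variable: the value travels to the database separately
            // from the SQL text, so it can never be reinterpreted as SQL.
            String sql = "SELECT email FROM users WHERE id = ?";
            try (PreparedStatement stmt = conn.prepareStatement(sql)) {
                stmt.setLong(1, userId);
                try (ResultSet rs = stmt.executeQuery()) {
                    return rs.next() ? rs.getString("email") : null;
                }
            }
        }
    }

Contrast that with concatenating the user-supplied value into the SQL string yourself, which is the entire class of bug we’re talking about.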
The reason SQL injection is a widespread security risk, to be blunt, is that there are astonishingly large numbers of inexperienced and/or low-skill developers out there who haven’t learned how to use the tools at their disposal. The techniques for avoiding injection vulnerabilities are simple and have been well-documented for literally decades, but they can’t help if a lousy dev decides to ignore them.
Now, a case could be made that it’d be better if instead, we were using a query language (maybe even a variant of SQL) that made injection attacks impossible. I agree in principle, but (a) I think this ends up being a lot harder than it looks if you want to maintain the same expressive power and flexibility SQL has, (b) given that SQL exists, “get bad devs to stop using SQL” doesn’t seem any more likely to succeed than “get bad devs to use bind variables,” and (c) I have too much faith in the ability of devs to introduce security vulnerabilities against all odds.
it would be great to “just” have a DB with a binary protocol that makes it unnecessary to write an ORM.
Other people have talked about other parts of the post so I want to focus on this one.
The problem an ORM solves is not a problem of SQL being textual. Just switching to a binary representation will have little or no impact on the need for an ORM. The ORM is solving the problem that’s in its name: bridging the conceptual gap between an object-oriented data model and a relational data model. “A relational data model” isn’t about how queries are represented in a wire protocol; instead, it is about how data, and relationships between pieces of data, are organized.
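A toy sketch of that gap, with invented class and table names, just to show what “conceptual” means here:

    // Object-oriented view: an Author owns its Books as an in-memory collection,
    // navigated by following object references. (Names are invented for illustration.)
    class Book {
        String title;
    }

    class Author {
        String name;
        java.util.List<Book> books;
    }

    // Relational view of the same data: two tables linked by a foreign key,
    // navigated by joining at query time:
    //
    //   author(id, name)
    //   book(id, author_id, title)
    //
    // The ORM's job is translating between these two shapes (identity, lazy loading,
    // cascading writes, and so on), and none of that changes if the query goes over
    // the wire in a binary encoding instead of as text.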
So, okay, what if you get rid of the relational data model and make your database store objects directly? You can! NoSQL databases had a surge in popularity not too long ago, and before that there were plenty of object databases.
What you’re likely to discover in an application of any real complexity, though, and the reason the industry has cooled somewhat on NoSQL databases after the initial hype cycle, is that the relational model turns out to be popular for a reason: it is extremely useful, and some of its useful properties are awkward to express in terms of operations on objects. True, you can ditch the ORM, but you often end up writing convoluted queries or application-side code to do things that would be simple in SQL, and the net result is more complex and harder to maintain than when you started. (Note “often” here; sometimes non-relational databases are the best tool for the job.)
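As a rough example of what “simple in SQL” means, reusing the toy author/book schema from the sketch above plus an invented published_year column:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class TopAuthors {
        // One declarative statement covers the join, the filter, the grouping, and the sorting.
        public static void printTopAuthorsSince(Connection conn, int year) throws SQLException {
            String sql =
                "SELECT a.name, COUNT(*) AS book_count " +
                "FROM author a JOIN book b ON b.author_id = a.id " +
                "WHERE b.published_year >= ? " +
                "GROUP BY a.name " +
                "ORDER BY book_count DESC " +
                "LIMIT 10";
            try (PreparedStatement stmt = conn.prepareStatement(sql)) {
                stmt.setInt(1, year);
                try (ResultSet rs = stmt.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getString("name") + ": " + rs.getInt("book_count"));
                    }
                }
            }
        }
    }

In a document store that embeds each author’s books, the same question typically becomes an aggregation pipeline or an application-side fetch-and-group, which is exactly the kind of accidental complexity I mean.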
And even in an object database, you still have to know what you’re doing! Storing objects instead of relational tuples won’t magically cause all your previously-slow queries to become lightning-fast. You will still need to think about data access patterns and indexes and caching and the rest. If the problem you’re trying to solve is “my queries are inefficient,” fixing the queries is a much better first step than ditching the entire database and starting over.
You’re not missing much power with jOOQ, in my opinion as someone who has used it for years. Its built-in coverage of the SQL syntax of all the major database engines is quite good, and it has easy type-safe escape hatches if you need to express something it doesn’t support natively.
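For anyone curious what those escape hatches look like, here’s a hedged sketch of jOOQ’s plain SQL templating; the Postgres full-text functions and the field are just assumed examples of something not modeled natively:

    import org.jooq.Condition;
    import org.jooq.Field;
    import org.jooq.impl.DSL;

    public class EscapeHatchSketch {
        // Embed a vendor-specific expression while keeping the rest of the query
        // type-safe. {0} and {1} are substituted with real query parts, and
        // DSL.val() produces a bind value, so this stays injection-safe.
        public static Condition matchesSearch(Field<String> body, String searchTerm) {
            return DSL.condition("to_tsvector({0}) @@ plainto_tsquery({1})",
                    body, DSL.val(searchTerm));
        }
    }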
Totally fair! They did a good job of making the main storyline playable as a solo player, but the core gameplay loop is still unmistakably MMO-style and not to everyone’s taste.
I love that song in particular because (very minor spoiler) it works both as background music and as diegetic music. In the story, that boss is trying to entice you into going permanently to sleep and living in a dream world where you’ll achieve all your goals and desires, while becoming her meat puppet in the real world. When you’re playing the game rather than watching it with onscreen lyrics on YouTube, you are only sort of half-listening to the song while you focus on the battle, so you don’t realize right away that the battle music is the boss singing to you to seduce you into her flock even while you’re fighting her.
Final Fantasy XIV has a diverse soundtrack and a terrific story, but it is a huge time commitment. The story starts off pretty slow and takes a long time to build up.
A few boss fight themes as examples:
As a fan of Ms. Marvel, I enjoyed the main campaign well enough, but all the MMO stuff is obnoxious. Luckily you can mostly ignore it and go through the campaign missions single-player. I uninstalled it after getting to the end of the story.
This is spot on. I would add one little wrinkle: you not only have to accept that not everything works the way it does in your home country, but also that not everything should.
You can be the kind of expat who spends all day griping about how much worse things are in your new home than your old one, or you can be the kind who shifts their mindset such that the new country’s ways become second nature.
jOOQ is really the best of both worlds. Just enough of an ORM to make trivial CRUD operations trivial, but for anything beyond that, the full expressive power of SQL with added compile-time type safety.
And it’s maintained by a super helpful project lead, too.
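Roughly what that “best of both worlds” looks like in practice. AUTHOR, BOOK, and AuthorRecord stand in for classes jOOQ’s code generator would produce for a hypothetical author/book schema, so this is a sketch rather than copy-pasteable code:

    import static org.jooq.impl.DSL.count;

    import java.sql.Connection;

    import org.jooq.DSLContext;
    import org.jooq.Record2;
    import org.jooq.Result;
    import org.jooq.SQLDialect;
    import org.jooq.impl.DSL;

    public class JooqSketch {
        public static void demo(Connection conn) {
            DSLContext ctx = DSL.using(conn, SQLDialect.POSTGRES);

            // Trivial CRUD stays trivial: an updatable record from the generated schema.
            AuthorRecord author = ctx.newRecord(AUTHOR);
            author.setName("N. K. Jemisin");
            author.store(); // INSERT (or UPDATE if the record had been fetched first)

            // Beyond CRUD: real SQL (join, aggregate, sort), checked at compile time.
            Result<Record2<String, Integer>> booksPerAuthor = ctx
                    .select(AUTHOR.NAME, count())
                    .from(AUTHOR)
                    .join(BOOK).on(BOOK.AUTHOR_ID.eq(AUTHOR.ID))
                    .groupBy(AUTHOR.NAME)
                    .orderBy(count().desc())
                    .fetch();

            booksPerAuthor.forEach(r -> System.out.println(r.value1() + ": " + r.value2()));
        }
    }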
I think the value of standups depends a ton on the team’s composition and maturity.
On a team with a lot of junior or low-performing devs who don’t have the experience or the ability to keep themselves on track, or a team with a culture that discourages asking for help as needed, a daily standup can keep people from going down useless rabbit holes or unwittingly blocking one another or slacking off every day without anyone noticing.
On a team of mostly mid-level and senior devs who are experienced enough to work autonomously and who have a culture of communicating in real time as problems and updates come up, a daily standup is pure ceremony with no informational value. It breaks flow and reduces people’s schedule flexibility for no benefit.
When I’m thinking about whether it makes sense to advocate for or against daily standups on a team, one angle I look at is aggregate time. On a team of, say, 6 people, a 15-minute daily standup eats 7.5 hours of engineering time a week (6 people × 15 minutes × 5 days) just on the meetings themselves. The interruption and loss of focus is harder to quantify, but in some cases I don’t even need to try to quantify it: when I ask myself, “Is the daily standup consistently saving us a full person-day of engineering time every week?” the answer is often such a clear “yes” or “no” that accounting for the cost of interruptions wouldn’t change it.
Especially infuriating when the other person is in a very different time zone. I once worked on a project with a partner company in a time zone 10 hours ahead of mine and it was common for trivial things to take days purely because the other person insisted on typing “Hi,” waiting for my “Hi, what’s up?” response (which they didn’t see until the next day since our hours didn’t overlap), and then replying with their question, which I didn’t see until my next day. Answering the actual question often took like 30 seconds, but in the meantime two or three days had gone by.
I came to believe they were doing it on purpose so they could constantly slack off and tell their boss they were blocked waiting for my answer.
My frustration is less with the people who are late and more with the meeting host making the rest of the attendees sit around twiddling their thumbs waiting for the late person. Unless the late person’s presence is the point of the meeting, just get started and let them catch up.
“We’ll wait a few more minutes for person X to join, then get the meeting started,” like the other ten people who made the effort to show up on time deserve to be punished with extra meeting time for being responsible. Bonus points if this causes the meeting to run a few minutes long.
This is an obvious case of “headline is written by someone other than the article’s author.” The article compares ANA and Singapore to United Airlines specifically. The headline is the only place where “US carriers” are mentioned as a generalization.
Personally, I dislike flying United but other US carriers like JetBlue are fine.
I’ve been under a few times but the most memorable (in one sense) was when I had some minor surgery as a kid. From my point of view, it was like teleportation: I was in the operating room, I blinked, and I was suddenly on a bed in a completely different room. No sense of the passage of time.
My intuition is that it’s probably in about the same range as the broadcast networks, but I have no numbers to back that up.
I don’t think it can be significantly higher or lower: if the cancellation rate were significantly lower, “streaming services always cancel after one season” wouldn’t have caught on as a perception, and if it were significantly higher, it wouldn’t be as easy to find multi-season streaming shows as it currently is. But is it slightly higher or lower? I have no idea.
I actually did run some numbers on this at one point and found that the cancellation rate on network shows has ranged from 30-50% for the last 70 years, with the average number of seasons hovering just under 2. Reddit post with graphs and sources.
Running the same numbers for streaming services is trickier, and I couldn’t figure out a reliable way to get a good data set to analyze. But even so, the numbers for broadcast TV are high enough that it would be numerically impossible for streaming services to, say, be 3 times more likely to cancel a show after one season.
It is bizarre to me that people act like streaming services invented the concept of canceling series after just one season, or believe that it’s a new practice. Broadcast TV has regularly done exactly the same thing for its entire history. Streaming services almost always at least release all the episodes rather than leaving some of them unaired.
The “developed or supplied outside the course of a commercial activity” condition is part of why people are up in arms about this. If I’m at work and I run into a bug and submit a patch, my patch was developed in the course of a commercial activity, and thus the project as a whole was partially developed in the course of a commercial activity.
How many major open-source projects have zero contributions from companies?
It also acts as a huge disincentive for companies to open their code at all. If I package up a useful library I wrote at work, and I release it, and somebody else downloads it and turns up a vulnerability that is only exploitable when the library is used in a way I never used it myself, boom, my company is penalized. My company’s lawyers would be insane to let me release any code given that risk.
I don’t think Netflix actually cancels shows after two seasons any more often than other networks do.
Somehow people got it into their heads that Netflix is far more cancel-happy than its competitors, but if you look at the numbers, traditional TV networks have had like a 50% cancellation rate for decades.
Even TOS only lasted three seasons before it was cancelled!
If Netflix is more prone to cancelling shows at all, which I’m not convinced is even true, it can’t be by an enormous margin.