Yet another win for Systemd.
Target disk mode is fantastic, I’m thrilled to see this coming to Linux
Worked in IT, target disk mode is a life saver when you have to recover data from a laptop with a broken screen/keyboard/bad ribbon cable and don’t want to take apart something held together by glue.
It’s a nice feature. I used it a few times on old Macs with external FireWire hard drives for booting a different OS or troubleshooting.
Soon we’ll be debating whether we call it systemd/linux or gnu/systemd.
Red Hat: “we put the D in your System”
I’m happy that this is coming to Linux (I believe Nutanix has a great method to expose storage over IP), but I would have liked this to be a bit more project/dependency agnostic.
I mean, it’s specifically adding support to systemd for booting disks over an existing protocol. That’s pretty well within scope?
Oh, my gripe is not with Poettering creating a systemd service for it (for I cannot dispute that systemd wrappers such as this do make life somewhat easier), but I would have liked perhaps a more distribution agnostic method of running NVMe-TCP in a way that the OS would not have to be booted. I suppose I do understand the community’s support for this: systemd is used by most of the popular distributions, and writing a service for it will let systemd interleave this between other processes and perhaps fulfill the goal of producing a block device on an L3 network without booting userland.
As one can probably surmise, I do not have a great understanding of how the process works - I’ll have to figure out how macOS did it first, and then how Poettering implemented it. Then I think I’ll have a better idea of what the solution is geared towards.
Thanks for your comment!
I would have liked perhaps a more distribution agnostic method of running NVMe-TCP in a way that the OS would not have to be booted.
From the pull request:
This all requires that the target mode stuff is included in the initrd of course. And the system will then stay in the initrd forever.
I think that’s as minimal a boot target as you can reasonably get, or in other words you’re as far away from booting the OS as you can get.
So now the question is whether this uses any systemd-specific interfaces beyond the .service and .target files. If not, it should not take much effort to create a wrapper init script for the executable and run it on non-systemd distros.
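Something along these lines would probably do as a wrapper - purely a sketch, since the binary path and the assumption that it needs no mandatory arguments are guesses on my part; the shipped systemd-storagetm.service would be the authoritative reference:

```sh
#!/bin/sh
# Hypothetical sysvinit-style wrapper for systemd-storagetm on a non-systemd
# distro. Path and arguments are assumptions, not taken from the PR.

DAEMON=/usr/lib/systemd/systemd-storagetm
PIDFILE=/run/storagetm.pid

case "$1" in
  start)
    # Networking has to be up already; under systemd the unit ordering
    # would normally take care of that for you.
    start-stop-daemon --start --background --make-pidfile \
        --pidfile "$PIDFILE" --exec "$DAEMON"
    ;;
  stop)
    start-stop-daemon --stop --pidfile "$PIDFILE"
    ;;
  *)
    echo "Usage: $0 {start|stop}" >&2
    exit 1
    ;;
esac
```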
Thanks, that makes it easy to understand. Indeed, it doesn’t seem very dependent on systemd, which is great. I was aware that the project existed, and for a second thought that Poettering was trying to integrate it directly within systemd somehow whilst making improvements to it. I suppose that’s not the case, which is good.
And you’re correct, that is probably the easiest way to boot the minimum required resources.
Thanks.
“Magic was meant to serve men, never to rule over them.”
Pragmatism > all else.
Oh, another arm growing.
Yay, yet another storage protocol over the network.
Not a storage protocol over the network, but yes :P
“via NVMe-TCP (in case you wonder what that is: it’s the new hot shit for exposing block devices over the network, kinda like iSCSI…”
So….?
The protocol already existed. This made it convenient to boot from it
So NVMe-TCP is yet another storage over network standard…. Regardless of making it work like this.
I guess if you had your way we’d still be doing token ring over twin-ax. Whatever
I see no flaw in this logic
This seems like a win for almost all distros
Link to the post (for accessibility and follow-up in the thread): https://mastodon.social/@pid_eins/111324093735348164
Pull request: https://github.com/systemd/systemd/pull/29748
Can someone eli5 pls?
“Target disk mode”, which this claims to take a lot of inspiration from, pretty much turns your computer into an external hard drive, so you can connect another machine to it for direct access. This appears to be trying to accomplish the same, but over the network.
If you’ve ever stuffed up a machine so badly that the best idea you could come up with was to take the hard drive out and work on it from another machine - this pretty much allows you to do that. But instead of taking the drive out and putting it in an external drive enclosure, you just ask the stuffed-up machine to act as the external drive enclosure.
Great answer
Oh okay. Thanks for the simple explanation :)
same, i have no idea what any of that means and i use runit
runit gang !
Your assessment isn’t entirely correct, as this is indeed related to systemd. Read the PR: https://github.com/systemd/systemd/pull/29748
@TCB13 services aren’t systemd-related just because they are launched by systemd.
A service by itself shouldn’t be part of systemd; it should be implemented separately and run under systemd. However, this is using the systemd target subsystem, which is a little more specific.
Exactly my point. Thanks.
@winterayars systemd targets were formerly known as runlevels, and this particular one could probably also work with init=, because what else could you possibly run at the same time?
You might be able to get away with just using init=
@winterayars then why the fuck is systemd involved?
“You might be able to get away without systemd” does not mean there’s no benefit to using it. There could be a management benefit (easily putting the system in different states) and/or it may be (considerably) easier to do it with systemd backing it.
If you had to (hypothetically) reimplement most of systemd’s core functionality to do it without systemd, but can do it trivially with it, then that sounds like you just don’t like “the project named systemd”, an opinion that should not have an impact on the technical decisions.
(Edit)
Actually, I didn’t throw in any specific reasons that respond to the question itself. Let me do that.
This feature is leaning on connecting the storage through networking, which makes sense. (Ideally you would do it like macOS and only let direct computer-to-computer connection run it for security reasons, at least by default.) That means you need a DHCP stack spun up, which systemd gives the project an easy way to do. In addition, any other features can also lean on other pieces of the OS through systemd. It’s just easier.
Lennart Poettering, being a lead on the systemd project, is targeting systems where systemd is the init system. That is, it’s the first actual OS process started. With this in mind, if you wanted to start this “storage target mode” before systemd, you would have to implement a bunch of stuff, e.g. a custom DHCP configuration to get networking going. Then, of course, you would have the systemd “OS level” networking and then, separately, the “storage target mode” networking, which may mean you then have to implement UI to connect the device to the network if you have a special network configuration.
If you wanted to set this up after the init system then… uh… well, that’s the implementation as it currently is being developed. It’s a systemd target because systemd is the init system in question. That’s what Poettering is doing, here.
There are probably more reasons why it makes sense to use systemd, but fundamentally systemd is the init system and it can solve problems for the project.
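To make the “it’s just easier with systemd” point concrete: declaring the network dependency is a couple of lines in a unit file. This is not the actual unit from the PR, just an illustrative sketch with made-up names and a guessed binary path:

```sh
# Write a hypothetical unit that leans on systemd for network-online
# ordering instead of rolling its own DHCP setup. Names and paths are
# illustrative only, not the real systemd-storagetm.service.
cat > /etc/systemd/system/nvme-export.service <<'EOF'
[Unit]
Description=Expose local block devices over NVMe-TCP (illustrative sketch)
Wants=network-online.target
After=network-online.target

[Service]
ExecStart=/usr/lib/systemd/systemd-storagetm
EOF

systemctl daemon-reload
systemctl start nvme-export.service
```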
From what I see in the repo, this functionality is being built into systemd (in the same vein as something like systemd-resolved), and introduces a new target dedicated for the new feature.
Sure, you could probably rip it out and use it with your own init system, but it seems tedious to then scour the documentation to make sure your init system brings up the ‘dependencies’ launched by the preceding systemd targets, so the NVMe-TCP service can run.
Would be easier to just use another existing implementation IMO, most people running their own init systems probably want more than the bare minimum featureset offered by the services included in systemd’s package
How is it related? Is there something preventing the executable from running without systemd? Just providing a service and target file doesn’t mean anything if it can run without them just fine. If it came with a reference init script instead I don’t think people would be arguing that it’s part of sysvinit and that sysvinit is bloated.
Is this like booting over PXE? Is NVMe-TCP widely supported on motherboards?
No, this has nothing to do with your motherboard. Once you reach the boot menu you’ll be able to pick your OS or, alternatively, systemd-storagetm. If you choose the latter, your disks will be available to other machines over NVMe-TCP. Just like Apple.

The problem with constantly comparing things to and making analogies with Apple stuff is that many of us have no idea what magic tech Apple does, so saying things like “just like Apple” is a completely useless phrase that gives zero info whatsoever about anything.
It’s probably why we’re getting the tech almost 20 years late. Apple started doing this with FireWire
So I could mount and chroot over TCP to fix problems? Looks a little more complicated at this point than fstabbing an iSCSI target, but I imagine that’ll improve. https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html/managing_storage_devices/configuring-nvme-over-fabrics-using-nvme-tcp_managing-storage-devices
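Roughly, yes. On the rescue machine it should look something like the usual nvme-cli dance - the addresses, ports, device names and the NQN below are placeholders of my own, not anything the PR defines:

```sh
# Client side, with nvme-cli (all values are placeholders).
nvme discover -t tcp -a 192.0.2.10 -s 8009        # list advertised subsystems
nvme connect  -t tcp -a 192.0.2.10 -s 4420 -n nqn.2023-10.example:broken-laptop

# The remote disk shows up as a local NVMe namespace (e.g. /dev/nvme1n1),
# so the usual rescue routine applies:
mount /dev/nvme1n1p2 /mnt
mount --bind /dev  /mnt/dev
mount --bind /proc /mnt/proc
mount --bind /sys  /mnt/sys
chroot /mnt

# When done:
nvme disconnect -n nqn.2023-10.example:broken-laptop
```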
Sweet.
The PR aims to make it easy and simple.
So when it’s booted it will just advertise the storage to the LAN over the NVMe-TCP protocol?
Not “booted” - you won’t be booting your full OS. It’s just an option on the boot menu that launches systemd and a small program that does the magic, and nothing else.
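If I read the PR right, that “option on the boot menu” boils down to a boot entry whose kernel command line tells the initrd to boot into the new target. A rough sketch of what such an entry might look like - the unit name is taken from the PR discussion and the kernel/initrd paths are assumptions, so verify against the merged docs:

```sh
# /etc/grub.d/40_custom sketch: a dedicated "storage target mode" entry.
# rd.systemd.unit= keeps the system in the initrd and starts only the
# target-mode units instead of switching to the full OS.
menuentry "Storage target mode (share disks over NVMe-TCP)" {
    linux  /vmlinuz-linux rd.systemd.unit=storage-target-mode.target
    initrd /initramfs-linux.img
}
```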
But is it running at the same time as an OS, or is it just a device without an OS running, sharing storage?
So share drive / simplified NAS, no?
Kind of… but you’re directly accessing the hard drive, like iSCSI does. Way less latency, and no high-level (and slow) protocols like SMB are involved.
NVMe/TCP is an extension of the NVMe base specification that defines the binding of the NVMe protocol to message-based fabrics using TCP. The rules for mapping NVMe queues, creation of NVMe-oF capsules, and the methods used to deliver the capsules over the TCP fabric are described in the NVMe/TCP Transport Specification. By binding the NVMe protocol to TCP, NVMe/TCP enables the efficient end-to-end transfer of commands and data between NVMe-oF hosts and NVMe-oF controller devices by any standard Ethernet-based TCP/IP networks. Large-scale data centers can use their existing Ethernet-based network infrastructure with multilayered switch topologies and traditional network adapters
- https://infohub.delltechnologies.com/l/nvme-nvme-tcp-and-dell-smartfabric-storage-software-overview-ip-san-solution-primer-1/what-is-nvme-tcp/
- https://nvmexpress.org/wp-content/uploads/March-2019-NVMe-TCP-What-You-Need-to-Know-About-the-Specification.pdf
- https://nvmexpress.org/answering-your-questions-nvme-tcp-what-you-need-to-know-about-the-specification-webcast-qa/
SAN. Not NAS.
So NAS without any controls. Yay?
Trivial to set up a NAS with minimal overhead, plus you can boot any PC into this once it’s standard, which would be nice for rescuing when you fuck something up: rather than fiddling around with rescue mode or digging out the drives, you just boot into this mode and access the drives from your laptop or whatever.
It doesn’t sound easier than ventoy tbh.
So like, the GRUB boot menu? And from there I can boot from a location on my NAS, for example? I set up iPXE a couple of weeks ago but it couldn’t load over my Thunderbolt-to-10G NIC. Would this help?
So this is a service aimed at exposing disks as NVMe-TCP boot targets when the system boots? I mean, I love it. I wonder if this could be used to help with a chicken-and-egg problem I’ve had when building clustered systems. So far I either need a running service to host a network file system (like NFS or Ceph), or I need local disks that bootstrap the clustered storage environment.
And why would this need systemd of all things? Should basically be doable over something like SSH / TFTP, right?
Not compelling to me. Gonna stick with runit and/or s6 on my Artix Linux systems at home. But you do you Lennart.
Same for me, but dinit