JollyGreen_sasquatch

joined 2 years ago

We are a VDI shop too, so we have to be sure performance is at least as good. Combined with a few other complicated setups, it is non-trivial to test alternatives.

Migration usually means shutting down the VM, exporting it from VMware, and importing it on the other side. We usually have really generous maintenance windows (i.e. basically 36-48 hours every weekend), and I would still expect it to take a year to migrate everything if we went all in.
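As a rough sketch of what that export/import step can look like when the target is KVM/libvirt (the vCenter path, VM name, and storage pool here are made up for illustration):

# Export the powered-off VM from vCenter as an OVA (ovftool ships with VMware tooling)
ovftool "vi://administrator@vcenter.example.com/DC1/vm/app01" /exports/app01.ova

# Convert and import the OVA on the KVM side (pool name "default" is an assumption)
virt-v2v -i ova /exports/app01.ova -o libvirt -os default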

We would be migrating over 50 hypervisor hosts too, plus SAN connectivity, shared disks, and GPU-accelerated VDI. It's a lot to evaluate for each option, test that it all works, and figure out the caveats.

[–] JollyGreen_sasquatch@sh.itjust.works 2 points 5 days ago (2 children)

Many places are still looking at options and the costs of switching. Where I work still is, even though we already have a large Linux server fleet. I expect most companies that are going to switch are on a 3-5 year plan to ramp up the move to something else.

A method not yet mentioned is deleting by inode. (I've accidentally created filenames I didn't know how to escape at the time, like -- or other command-line flags/special characters.)

ls -li

Once you get the inode

find . -type f -inum $inode -delete
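For example, with a hypothetical stray file named --force (inode numbers made up), the session might look like this:

ls -li
1835021 -rw-r--r-- 1 user user   0 Jan  1 12:00 --force
1835019 -rw-r--r-- 1 user user 112 Jan  1 12:00 notes.txt

find . -type f -inum 1835021 -delete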

So the cost of a tow or mobile mechanic, the cost of a replacement starter, and the cost of alternate transport or lost wages would each take years to make up for.

[–] JollyGreen_sasquatch@sh.itjust.works 1 points 3 weeks ago (1 children)

Transmission loss/attenuation only determines how much power is needed on the transmitting side for the receiver to be able to pick up the signal. The wireless networks I am talking about don't really have packet loss (aside from when the link goes down for reasons like hardware failure).

I mention Chicago to New York specifically because, in the financial trading world, we use both wireless and fiber network paths between those locations, and measured/real latency is a very big deal, measured down to the nanosecond.

So what I mention has nothing to do with human perception, as fiber and wireless are both faster than most humans can perceive. We also don't have packet loss on either network path.

High-speed/high-frequency wireless is bound by the curvature of the earth and terrain for repeater locations. Even with all of the repeaters, the measured latency of these commercially available wireless links is about half the latency of the most direct commercially available fiber path between Chicago and New York.

Fiber has in-line passive amplifiers, which are a fun thing to read about, so transmission loss/attenuation really only determines where those amplifiers need to be placed.

You are conflating latency (how long it takes bits to go between locations) with bandwidth (how many bits can be sent per second between locations) in your last line.

[–] JollyGreen_sasquatch@sh.itjust.works 3 points 3 weeks ago (3 children)

The speed of light through a medium is what varies (I have to deal with this at work), and the speed of light through air is technically faster than the speed of light through glass fiber. Hollow-core fiber now exists, though, which makes the difference smaller.

Between Chicago and New York, the latency of the commercially available specialized wireless links is roughly half that of standard fiber taking the most direct route. But the bandwidth is also only in the gigabits/s, versus the terabits/s you can put over a typical fiber backbone.
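A rough back-of-the-envelope check of why the medium matters (the distance and refractive index below are assumed round numbers, not measured route lengths):

awk 'BEGIN {
  d = 1145        # approx. straight-line Chicago-New York distance, km (assumption)
  c = 299792      # speed of light in vacuum, km/s
  n = 1.47        # typical refractive index of a glass fiber core (assumption)
  printf "one-way through air:   %.2f ms\n", d / c * 1000
  printf "one-way through fiber: %.2f ms\n", d * n / c * 1000
}'

Real fiber routes also run longer than line of sight, which is part of why the measured gap in practice ends up closer to the 2x figure.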

But both are faster than humans can perceive anyway.

[–] JollyGreen_sasquatch@sh.itjust.works 6 points 1 month ago (1 children)

There are modern lapdocks with USB-C.

[–] JollyGreen_sasquatch@sh.itjust.works 27 points 2 months ago (2 children)

The before-first-unlock state is considered more secure: file/disk encryption keys are still in the hardware security module and services aren't running, so there is less attack surface. When a phone is taken for evidence, it gets plugged into power and goes into a Faraday bag. This keeps the phone in the after-first-unlock state, where the encryption keys are in memory and more services that can be attacked to gain access are running.

In Linux everything is a file, so modifying files is all you really need. The hardest part is handling mobile endpoints like laptops that don't have always-on connections. Ansible pull mode is what we were looking at in a POC, triggered on VPN connection. Note that we already have a large Linux server footprint managed by Ansible, so it isn't a big lift for us.
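As a rough sketch of what that trigger could look like with NetworkManager's dispatcher (the repo URL, playbook name, and log path here are assumptions):

#!/bin/sh
# Hypothetical hook, e.g. /etc/NetworkManager/dispatcher.d/90-ansible-pull
# NetworkManager passes the interface as $1 and the action as $2.
ACTION="$2"

if [ "$ACTION" = "vpn-up" ]; then
    # -o / --only-if-changed: only run the playbook when the repo has new commits
    ansible-pull -o -U https://git.example.com/it/endpoint-config.git local.yml \
        >> /var/log/ansible-pull.log 2>&1
fi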

Tried this at work and discovered it only really works in VS Code and probably Eclipse. Other IDEs claimed support, but it turned out to be unusable.

I mostly agree with your point here, but I think you can limit the scope a bit more: mainly, provide a working build environment via one of the mentioned tools, since you will need it anyway for a CI/CD pipeline. You can additionally make the full development environment you use available for people to use if they choose. It is important that it be one you use regularly, so the instructions stay up to date for anyone who might want to contribute.

From my observations as a sys admin, people tend to prefer the tools they are familiar with, especially as you cross disciplines. A known working example is usually easy to adapt to anyone's preferred tooling.
