Building a New-to-me computer & want to move to Linux + virtual machine, any experiences?

Yeah, that I’m not sure about. I haven’t ever had to try that.

There are a lot of distributions nowadays that “look pretty” actually. It’s a matter of taste, though. Maybe check out Distrowatch.com. Look at the ranking list on the right side of the page. Click any distribution name to be taken to the description page, which usually has a desktop screenshot you can look at.

Compiz Fusion is what gives you transparency among other special effects. The actual window borders and colors can always be changed, but if you pick out a nice looking distribution from Distrowatch, then that part will have been done for you. IMHO, Ubuntu is nice looking. But, there are better looking desktops. You just have to look around to find what you ( and/or your family) like(s).

Ubuntu has you (and your Mom) covered. No need for another 3rd party app for “advanced file finder” functionality - it’s built right in.

It’s been a long time since I’ve had any virtual machine installed, so my experience is surely super outdated, but I remember performance of the virtual machine being prohibitive for anything other than light tasks. Of course, modern computers are many times more powerful and virtualization is more efficient, so it might be feasible. My last computer upgrade made Photoshop run like the wind, so who knows, you might not run into performance problems with it virtualized on your new hardware.

I have looked through Distrowatch in the past. My biggest problem with it is you never know which distros will keep being supported down the road outside of the top handful. Once I pick a distro, there is a good likelihood I will be using it for another decade, so I want it to be supported, naturally.

I actually liked the old Ubuntu Unity desktop more than the GNOME they are moving to now. I tried the GNOME version but was not very impressed, which is why I have been considering Mint this time around.

I will look into Compiz Fusion deeper; all I found in a quick Google search were videos from 2007 that looked like it just added some fancy alt-tab transitions and the like.

I will fully admit that the Linux file system is still one of the biggest hurdles I have with Linux. I am used to seeing my hard drive and everything on it. With Linux I just can’t seem to do that; everything is in these funky folders that don’t seem to correlate with actual folders on the drive. (If I type a folder into a terminal, without fail it cannot find it, and I have to add a bunch of stuff before it by copy/paste.)

It just never made sense. I am very particular in how my files are organized and have some deep trees to keep it all sorted out.

Even with virtualbox running in windows I have been impressed with the performance in the virtual machines. My biggest hurdle thus far has been lack of SSD storage and lack of ram.

I’ll try to not make this a book, but no promises. I’ve been running primarily Linux for many years now, and for a few years with a VM for some things; I still have a Windows install used for some games.

For the VM I use QEMU with GPU pass-through. I think the pass-through problems are the same regardless of hypervisor, but you have to be very careful how you set up the graphics cards so they can be isolated. They have to be on different buses, and depending on the drivers and card brands you use, not all combos work. I use an Nvidia host and AMD guest, and I know an Nvidia guest will not work, at least with an Nvidia host. I can’t use my CPU’s Intel graphics because it’s on the same bus as other things, and my host card has to sit in one of the x8 PCI-E slots because the x16 is on its own bus, which I can separate out for the guest card. Oh, and you can’t install any new versions of Solidworks (past 2013, at least) on a VM, or at least on this VM; during the install it just says you’re not allowed to install it on a VM.

Honestly, moving forward my plan might be just a second computer with Synergy or a KVM switch. Using a VM for advanced applications is a delicate balancing game of convenience and cost. Sure, a second computer costs more, but I need to buy a second GPU anyway for pass-through, and that’s a big chunk of the cost with most applications; I also need a nice stack of RAM in the host to run two OSes at once. Peripherals are easier to pass through, but you still need to switch the monitor, since the pass-through has to have its own input. If very small amounts of lag aren’t an issue, then Synergy is pretty good; I haven’t used it much recently, but it’s pretty seamless.

Because the first rule of any Linux thread is to say what distro you use: I use Arch, and while it’s probably decreasing my lifespan from stress related to updates breaking things, I can’t really see myself using Windows as my primary OS.
That being said, it really depends on the person. As much as Linux fanboys like to think it, Linux is not for everybody, but if it is, you know it.

Yeah, I have heard about those issues with the GPU passthrough as well, I was not sure if it had improved any.

I did the 2 computer thing with a KVM for many years and it just got really annoying since I only need the second PC occasionally and so I left it off most of the time.

Synergy is an option if I could find a nice low powered system that would not use a lot of power leaving it on 24/7.

I have been thinking about getting something like this to use as a media box but not sure if it can handle H265 @ 1080p.

https://www.ebay.com/itm/222923789749

Yeah, I agree the GNOME desktop they’re using now isn’t the best (in fact, I said it above :wink: ). Honestly, even more than the old Unity, I really liked the (now way long gone) Ubuntu Netbook Edition desktop, which was (visually, at least) a precursor to Unity. I think it was my all-time favorite desktop layout. But really, the coolest thing about Linux is how customizable it is. You could literally have any look you want - even a Windows- or Mac-like desktop is possible.

As far as longevity, I couldn’t promise you that ANY of the now-existing distributions will still be around in 10 years, but at least the larger ones should be. With that, I’d expect most of the mid-size-or-larger off-shoots of those main distributions should also survive at least that long. In fact, most of those still maintain backwards compatibility to their parent. So, even if you were using some off-shoot of Ubuntu and the group responsible for that distribution were to dissipate in the near future, you could merge your computer back into main Ubuntu without too much effort, even keeping the apps and desktop look/feel that you have been using. The reason this works is because of how Linux distributions do “package management”, which also makes updates and upgrades faster, easier, and safer, not only for the OS, but for your apps as well.

Yeah, that is why I have been sticking to Ubuntu / Mint as my first choices up to now; I’m pretty sure they will still be around for quite a while.

I will be honest, I have never used a Linux install through its full lifecycle. I always end up reinstalling long before that, and I don’t even bother transferring anything but my data from my old install.

You also bring up why I have considered just using Ubuntu and then modding it to feel like I want, although I could not find a desktop a year or two ago with near the refinement and prettiness to make my family happy (or myself, to be honest). They are all just so basic and boring; it makes it feel like an industrial workstation.

I tried to switch to Linux for a few months about 2 years ago, but it was flat-out rejected on this basis by everyone else, and I was not even thrilled with the desktop myself. It just didn’t seem as refined as Windows 7.

Anyone know of a Windows desktop clone that actually works? Or heck, even Mac is better than anything I have seen from Linux when it comes to visual appeal and refinement.

Due to the development time required, many of the “fancy” desktops are not always super reliable. On the whole I think KDE, or Plasma as I guess they’ve now decided to call it, is far nicer than Windows or Mac. It has so many nice features, like the control over windows: for instance, I have many programs that start automatically, and they all have window rules so they are locked into a place on a monitor and that’s where they stay, while others are only semi-locked or not locked at all. But with all of that comes the fact that it breaks a lot.

Depending on how much performance you need, there are many small computers that can run advanced programs sufficiently, and most modern computers use very little power in sleep mode and wake quite quickly. But if using Synergy, the host has to be on to use the guest; you can just make the host the one you use more. There will still be monitor-switching concerns; I always just handled that by giving each its own monitor. With my VM, I just hit the input-switch button on my middle monitor and have the VM pass-through use the VGA input. It’s reasonably quick and easy, and I can still use the other monitors normally in Linux at the same time. No matter what, running Linux and Windows at the same time is going to be complicated in some way.

Yep, that is the price with a lot of Linux apps: if they look good, they generally do not work well, and if they work well, they generally don’t look good lol.

My number one biggest complaint with Linux is the iron grip the terminal has on it. I grew up using DOS and could enter terminal commands with the best of them, but boy, when you are entering a new system and do not speak the language, it is a royal pain since everything has to be just right. I wish they would make it so that you can do everything with the GUI and only use the terminal if you want to, like Windows. It would make the learning curve way easier.

I am seriously considering that PC above, I figure worst case I could give it to my grandmother or find another use for it if it didn’t work for media streaming.

Your personal files go in /home/texasace, and can be organized however you want. Compared to what Windows uses, Linux’s file systems have fewer restrictions and more capabilities, though you don’t have to use any of the extra stuff.

Outside of /home/texasace, you probably won’t have to care what goes where, but if you want to understand things better there is a well-documented filesystem hierarchy standard explaining what everything is. Here are some of the basics:

  • /home: Everything which belongs to users. Probably the only area you have to care about.
  • /tmp: Handy scratch area for temporary files. Gets erased each time the system reboots, or periodically, depending on how things are configured.
  • Stuff necessary for booting and basic system function:
    • /etc: Config files. Similar to the Windows registry, I suppose, but doesn’t need any special tools to access.
    • /boot: Generally just the bootloader config, kernel, and a compressed image of drivers needed to boot the rest of the system.
    • /bin: The most important utilities (binaries, executables) needed for basic system functions. Anything named “bin” contains binaries or executables, and together they form the command line’s primary vocabulary.
    • /sbin: Core system binaries, necessary for system tasks but not normally used by users.
    • /lib, /lib64: Core libraries. Non-core ones go in /usr/lib.
  • Applications and non-core stuff:
    • /usr: Where applications go, mostly. Programs which aren’t necessary for basic system operations, but are still useful for other purposes.
    • /var: Writable area for program data which doesn’t belong to any specific user. Like, system logs, database contents, package metadata, and so on.
    • /opt: A place for third-party stuff which doesn’t follow the usual filesystem conventions.
  • Hardware and kernel access:
    • /media or /mnt: Holds default mount points for removable media and other extra drives. USB drives and DVDs and stuff generally go here.
    • /dev: Raw access to hardware devices. Everything in Linux is a file, including your drives, peripherals, sound card, and so on. If you need to partition a disk, this is where to find the disk.
    • /proc: Realtime info about kernel internals, system status, and running processes. Take a peek at meminfo and cpuinfo, for example.
    • /sys: Realtime info about kernel internals and system status, V2.
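
A quick, read-only way to poke around this hierarchy (the paths are all standard, but the output will vary by machine):

```shell
# All plain reads; nothing here modifies the system.
ls /                                  # the top-level directories described above
grep MemTotal /proc/meminfo           # realtime memory info straight from the kernel
grep -m 1 "model name" /proc/cpuinfo  # CPU model, also courtesy of /proc
```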

If you have multiple hard drives, instead of giving them letters, they have mount points. They can be mounted wherever you like. For example, if you have a lot of games from third parties, you could mount a hard drive on /opt so you have an entire drive dedicated to that sort of thing. Or if you have a lot of videos, you could mount a drive at /home/texasace/videos. Or you could copy the Windows drive-letter method and simply mount extra drives at /d, /e, /f, and so on. Or under your home directory, at /home/texasace/d, /home/texasace/e, /home/texasace/f.
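
As a sketch, mounting a second drive at one of those home-directory paths could look like this. The device name /dev/sdb1 and the ext4 filesystem type are assumptions for illustration; check what your drive is actually called before running anything like it.

```shell
# One-off mount (needs root); /dev/sdb1 and ext4 are assumed, not universal.
sudo mkdir -p /home/texasace/videos
sudo mount /dev/sdb1 /home/texasace/videos

# To make it permanent across reboots, a line like this goes in /etc/fstab:
#   /dev/sdb1  /home/texasace/videos  ext4  defaults  0  2
```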

One handy tool for keeping things organized is symlinks. These are what shortcuts in Windows were trying to be. Basically, it lets things appear at two or more different parts of your file hierarchy. So you could have drives named /d, /e, and /f, and organize your personal files into games/, videos/, flashlights/, or whatever, and shuffle around the physical location independent of the logical organization. So maybe videos/ is stored on /e/videos, but you access it from /home/texasace/videos. And then /e runs out of space so you move it to /f, but the entry in your home directory can remain. Just point it at the new location.
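
A tiny sketch of that relocation trick, using a throwaway directory under /tmp to stand in for the hypothetical /e and /f drives, so it’s safe to run anywhere:

```shell
# Sandbox standing in for real drives.
rm -rf /tmp/link-demo
mkdir -p /tmp/link-demo/e/videos /tmp/link-demo/f /tmp/link-demo/home
cd /tmp/link-demo

# home/videos is just a pointer to the physical location on "drive" /e:
ln -s ../e/videos home/videos

# Later, /e fills up, so the data moves to /f; only the link changes:
mv e/videos f/videos
ln -sfn ../f/videos home/videos

readlink home/videos    # prints ../f/videos
```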

Another way to use symlinks is to add multiple views of your data. For example, I have music/by_artist/ and music/by_genre/. Both contain the same songs, but they are organized differently. And there is physically only one copy of the song data. I use by_artist as the primary organization method, and by_genre contains symlinks to specific artists, albums, and songs which fit various genres. This doesn’t need to be a one-to-one mapping.
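
A minimal sketch of the two-view idea; the artist and genre names here are made-up placeholders:

```shell
# Sandbox music library; one physical copy of each song lives under by_artist.
rm -rf /tmp/music-demo
mkdir -p /tmp/music-demo/by_artist/Some_Artist
mkdir -p /tmp/music-demo/by_genre/psytrance
touch /tmp/music-demo/by_artist/Some_Artist/song.ogg

# The genre view is just a symlink into the primary organization:
ln -s ../../by_artist/Some_Artist /tmp/music-demo/by_genre/psytrance/Some_Artist

# The same file is now reachable from both views, with no duplication:
ls /tmp/music-demo/by_genre/psytrance/Some_Artist/    # shows song.ogg
```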

Kind of a tangent, but if you like to keep things organized, symlinks will probably be useful.

That’s a big cultural difference which isn’t likely to go away. In Windows, pretty much everything can be done with a GUI but only some of that is available at a command line. In Linux OSes, pretty much everything can be done from a command line but only some of that is available through a GUI.

For the most part though, everything regular people use computers for can be done without ever typing into a terminal.

I use terminals for almost everything, but I’m weird. GUIs are usually easier to learn but less powerful, and I prefer the latter. GUIs also tend to require periodic relearning, since they change every few years as interface fads come and go, but command line stuff tends to last for decades. For things I only do once, or things which are visual in nature, I prefer a GUI. But for repeated tasks, in the long run, a CLI tends to be less work. I usually follow a rule of three… the third time I have to do a task, I write a script to automate it. Then I never have to do it again. And that’s something which isn’t usually possible in a GUI.
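
The rule of three can be as simple as turning the repeated commands into a small function in ~/.bashrc. This toy example is entirely made up for illustration; it just counts the files in a directory:

```shell
# Hypothetical chore turned into a one-word command.
countfiles() {
    dir="${1:-.}"
    find "$dir" -maxdepth 1 -type f | wc -l
}

# From then on it's just another word in the shell's vocabulary:
rm -rf /tmp/chore-demo && mkdir -p /tmp/chore-demo
touch /tmp/chore-demo/a.txt /tmp/chore-demo/b.txt
countfiles /tmp/chore-demo    # prints 2
```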

Not what you asked, but I built my 64 bit Linux Mint pc years ago when I got a new motherboard from gigabyte for $19.00. This is my main machine I am using now for just about everything. Total cost with SSD, 8GB ram and a G3258 was $170 usd. Because of that, I built another cheap windows machine. I like keeping them simple and separate instead of using a VM and they are both lightning fast.

Thanks for that, I never saw a nice breakdown of the file system like that before. It makes a lot more sense like that. /dev is what I have been looking for, it sounds like; I am just much more used to seeing the whole drive in its raw state. Plus most of the time I have been trying to work with Windows drives, and it has confused the heck out of me to not just see the drives like I did in Windows.

Symlinks are also useful; I use them in Windows a fair amount, although only when trying to make something happen that normally should not. I am old school, I like to know exactly where my files are, what drive, etc. I don’t even use RAID much for this reason.

Yes, the command line has its place, and I understand it better than most, having grown up with it. Although I think growing up with it is part of why I like GUIs so much now; it gets really old always having to type in commands, getting one letter wrong, and then having to start over.

Plus like you said, for things you do not do very often the GUI is much simpler since you can just click the button and don’t have to memorize the command.

For me it is more a question of when someone will add the missing commands to the GUI. Sooner or later it will happen if Linux is ever to move into the home computer market in any big way. I am a massive fan of the Linux ecosystem, privacy, stability, and security.

Yeah, I have run multiple systems for many years. There were times I had 5 separate computers running at my desk.

Nowadays I only need to use the other system when doing a few specific tasks, and it makes more sense to use the extra horsepower of a single system for all of these. Worst case, I will stick with Windows and run Linux in a VM like I am now.

Yeah, I think dual booting in some form (either two OS on the same computer or two computers) will be the best choice for TA for now.

I found dual booting to be even more of a hassle than multiple systems, personally. Worst case, I will simply use a Windows host with a Linux virtual machine like I am now; I just prefer the stability and security of Linux for the host.

I should probably clarify what “raw” means. It gives access to the contents of the disk as a stream of bytes. Not files or directories or anything like that, just bytes. To view the actual filesystem contents, this stream needs to be mounted somewhere.

So, /dev/sda (SCSI Disk A) is the first hard drive, totally raw. /dev/sda1 is the first partition on that drive, also raw. To access its contents, it’s typically mounted on “/”, the root of the filesystem namespace. This is basically the equivalent to drive C:\ in Windows.

Then the drive letters go up from there… /dev/sdb, /dev/sdc, /dev/sdd, etc. If any partitions exist, they’ll be indicated by a number after the letters.

To see what is currently mounted, run “mount”. Or click on whatever GUI option is equivalent to the “My Computer” icon. Many GUIs put mounted disks on the desktop by default. (As a caveat, “mount” shows all sorts of stuff including internal bookkeeping filesystems, so to see only actual disks, one can do “mount | grep /sd” or “mount | grep ^/dev”.)
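
For example (output varies by machine; lsblk comes from util-linux and is installed nearly everywhere, but it's hedged here just in case):

```shell
# Only real block devices, skipping the virtual bookkeeping filesystems:
mount | grep ^/dev || echo "no /dev mounts visible (common inside containers)"

# lsblk draws a nicer tree of drives, partitions, sizes, and mount points:
if command -v lsblk > /dev/null; then lsblk; fi
```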

Two things:

  • Up arrow. Previous commands can be recalled and edited by pressing Up.
  • Tab completion. Most things can be auto-completed by pressing tab. It also will often display a list of complete-able items by pressing tab by itself. Or after displaying a list, keep pressing tab to make the shell auto-complete the items one at a time in order, if you don’t feel like typing any more letters.

Let’s say I wanted to play some music. Here’s what I could type:

  1. Fn-Home
  2. cd mu<tab><tab>a<tab>
  3. <tab>
  4. In<tab><enter>
  5. mp -shuffle /*

Here’s what each step does, and what I see onscreen:

  1. A terminal opens.
  2. Expands to:
    • cd mu
    • cd music/
    • cd music/by_
    • cd music/by_artist/
  3. Press tab one more time to display a list of artists.
  4. Expands and then changes directory:
    • cd music/by_artist/Infected_Mushroom/
  5. Tells my favorite music player to play all files below the current directory in random order. The actual program changes over time, but I can always access it, whatever it is, by running “mp”. It’s a shell alias.
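
For reference, an alias like “mp” is a one-liner in a shell config file. The player name below is an assumption; the post doesn’t say which player is behind the alias.

```shell
# In ~/.bashrc or ~/.zshrc -- "mpv" is just a stand-in for whichever
# player you actually prefer; swapping players means editing this one line,
# while the muscle memory ("mp -shuffle ...") stays the same.
alias mp='mpv'
```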

Then let’s say I stop that and decide I want to play an album called “The Voyage”, but I don’t recall the name of the artist who made it.

  1. q
  2. ../*/*Voy*<enter>
  3. <up><up><enter>

What this does is:

  1. [Q]uit the music player.
  2. Search for the album and go to its directory. I could optionally press tab before enter, to make the shell expand it to its full name, “../Haywyre/2012.The_Voyage”. Also, normally there would be a “cd ” at the beginning of this line, but since changing directory is such a common operation, some shells allow the user to omit the “cd ” part entirely.
  3. Recall the “mp -shuffle /*” command and run it again.

However, since this particular album is meant to be played in sequential order, I’d probably just run “mp *” to play everything in order. But if I wanted to recall the shuffle command later, I could do it without having to scroll through all previous commands, by hitting Ctrl-r then “shuf” to search command history for “shuf”.

I hope this gives a relatable example of how a modern CLI works for a common task, and how little actual typing is required.

I am well acquainted with the up arrow, I used that like some people use backspace lol.

Tabbing is new to me, though; I think I heard about it once but totally forgot about it. I have kind of blocked out my command line days. In the early days the up arrow did not even work, that I remember, or I did not know about it. Spelling has never been my strong suit, so I got (and get) really annoyed with the syntax and spelling/typing out of commands.

Mostly, though, it is a matter of “if you don’t use it, you will lose it.” I used to fly through commands, but I have forgotten most of the more advanced ones by now.

I will never argue that the terminal doesn’t have a place, just that it should not be a requirement for anything past the very basic uses of a computer. Learning the commands is a steep learning curve that many would not make it through.

I don’t see myself moving past Windows 7, and the support for Windows 7 is coming to a close rather quickly. So I have to make Linux work at some point.