How to get Tap To Click back on your Debian 9 XFCE Linux install

We are at an early point in the life of Debian 9, and therefore of many of the “downstream” distributions in Linux-Land these days.

Debian just made Debian 9, Stretch, the Stable version.  It also came out with an upgrade to 9.1 a couple days ago.

Since my own laptop was a Debian 9.0 install, I had a problem: the trackpad no longer did “tap to click”.  It was there in earlier versions and removed in a Debian 9.0 install.  They migrated to libinput, which promises to be new and shiny and do many new things, but most of those things are in the future – or so my missing Tap To Click would suggest.

I don’t use many of the more complex mouse options with my laptop.  It’s a Lenovo Thinkpad T530 without a touch screen.  I heavily use Tap to Click, so I want it back.  My other laptop, a Lenovo Thinkpad Yoga S1, had the same problem.  After a lot of research, this turned out to be a design decision.  Debian is my go-to operating system distribution due to the absolute depth of software and documentation out there.

So I set about to “fix it”.

DISCLAIMER:  I was able to do so on two computers, but with some thrashing around.  I will give here the information that I have, but that thrashing may make it less solid than my usual “cook book recipe” guarantee for the technical articles that I write.

Give it a shot.  If it works, let me know if you did anything different and I’ll mention it here.

Background – the documentation for Debian 9, Stretch, is still incomplete.  The files that I created had to be placed in Xsession.d, and the directories that the Debian docs pointed to were either missing or empty for me.  What they have is correct for the earlier versions, and the docs need to be proofread.

Or I went crosseyed and got the wrong damn directory…

Since this blog is a place I put documentation for my own uses (Linux as well as recipes and photography), I’ll put it here.  I’d rather not have the heat of an official inquiry on me since I live in Florida and it is quite hot enough as it is.

First:  Create a 50-synaptics.conf – the file will probably not be there on a “clean install”.

1) edit /etc/X11/Xsession.d/50-synaptics.conf

2) at the top merge (Copy and Paste) in the following lines:
Section "InputClass"
        Identifier "touchpad catchall"
        Driver "synaptics"
        MatchIsTouchpad "on"
        MatchDevicePath "/dev/input/event*"
        Option "TapButton1" "1"
        Option "TapButton2" "2"
        Option "TapButton3" "3"
# This option is recommended on all Linux systems using evdev, but cannot be
# enabled by default. See the following link for details:
#       MatchDevicePath "/dev/input/event*"
EndSection
Second, copy that file to /usr/share/X11/xorg.conf.d/50-synaptics.conf
Third, open a terminal and sign in as root to install a package:
apt install xserver-xorg-input-synaptics
Fourth: reboot.
On return, you should have tap to click working.
You can diagnose what the touchpad is doing by running “synclient -l” as root, which lists the driver’s current settings and gives you material for further research.
Entering “synclient TapButton1=1” on the command line turns tap to click on for the current session only, which is handy for testing.
Further options such as multitouch, two-finger tap for scrolling, coasting speed, and so forth are described in detail on the Debian Wiki Synaptics touchpad page at https://wiki.debian.org/SynapticsTouchpad
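If you want to experiment with those options before committing them to the config file, synclient can set them for the current session.  A minimal sketch (the option names are standard synaptics options; changes made this way are lost when you log out):

synclient -l                       # list every option the synaptics driver currently exposes, with values
synclient VertTwoFingerScroll=1    # turn on two-finger vertical scrolling for this session
synclient HorizTwoFingerScroll=1   # turn on two-finger horizontal scrolling for this session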

Using a Manifest to Recreate your Linux System Selectively

Last week, I had finally had enough of not being able to hibernate my computer.  There was enough “chaff” and enough weird things happening.

I did realize that I could create a list of everything I had, and then get Linux to import that list and reinstall all my programs.

That would be my Manifest.

I did it knowing that I could be reintroducing the problem that I created with the old system.

I was right.  So I did it over, selectively.

And it worked.  Hibernate and video crashes were problems, and after 17 consecutive hibernate cycles over two days of active use, I’d say I am done.

This was a whole lot simpler.  You see, this scary Manifest thing is nothing more than a text file, generated within “Synaptic”, that contains the markings for all of the programs I installed over the 7 years that I had that Linux install.

I went through that file and deleted every entry where I did not expressly know what that particular program was, along with anything I knew I did not want.

Easy, except the file was in chronological order, or… well, let’s just pretend it was and leave it at that.  Luckily it can easily be sorted into alphabetical order by program name.

One line in Terminal, just like everything in Linux, would solve it.

Assuming the Manifest is called /home/bill/Desktop/Manifest.txt

In Terminal, issue this command string on one line:

cat /home/bill/Desktop/Manifest.txt | sort > /home/bill/Desktop/SortedManifest.txt

Now the list is in alphabetical order, which makes things easier.

I did delete anything that started with “lib”, as well as KDE, Gnome, and Mate packages, since I strongly prefer XFCE to all of those.  My choice, no big deal.

I simply edited the file in Mousepad, and deleted all things I did not want.
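If you would rather trim it from the command line than in an editor, something like this does the same job (a sketch; I am assuming each line of the markings file starts with the package name, and “TrimmedManifest.txt” is just an example name):

grep -v -E '^(lib|kde|gnome|mate)' /home/bill/Desktop/SortedManifest.txt > /home/bill/Desktop/TrimmedManifest.txt
# drops lib*, kde*, gnome* and mate* entries; read the trimmed file into Synaptic instead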

If you want the long form description of all of this, Last Week’s Post is at this link.  However the short form is here:

1) on original install create a Manifest within Synaptic Package Manager.

a) open synaptic

b) Select File, Save Markings As

c) navigate to the place you want to store this file, and give it a name.

d) Tick the box “Save full state, not only changes”

e) Click Save.

2) Verify that your manifest is on removable media.

3)  Remove any unwanted programs from the Manifest

4) Save your important files from the operating system to removable media (a sketch of this step follows):

/etc/samba/smb.conf

/etc/apt/sources.list

the Manifest file
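In Terminal, that can be as simple as one copy (a sketch; I am assuming the removable media is mounted at /media/bill/USB):

cp /etc/samba/smb.conf /etc/apt/sources.list /home/bill/Desktop/Manifest.txt /media/bill/USB/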

5) Install a fresh copy of your Debian Based operating system on the destination computer.

Debian, *Ubuntu, Linux Mint, whatever…

6) Get the destination computer “up to date” and stable.

7) Compare and manually update your /etc/apt/sources.list file against the one from the original computer (a sketch follows):

copy the freshly installed version to a save file

I copied my own from the original computer into its place and updated

then you will need to update the PGP keys for any added repositories, such as http://www.deb-multimedia.com
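In Terminal, as root, the whole step looks roughly like this (a sketch; the save-file name and the USB mount point are just examples):

cp /etc/apt/sources.list /etc/apt/sources.list.fresh-install   # keep the as-installed copy
cp /media/bill/USB/sources.list /etc/apt/sources.list          # drop the old list in its place
apt update                                                     # repositories with missing PGP keys will complain here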

8) install the manifest by

a) open synaptic

b) Select File, Read Markings

c) find and open the manifest.txt file

d) click open

e) verify needed markings have been imported into Synaptic, and click Apply.

f) there will be additional libraries incorporated into your install list due to any new dependencies.

9) You’re done.  Verify everything is OK.  Live with it for a while.

You will want to add in programs like libdvdcss to allow DVDs to play, and Samba to share files, but these things will need to be done individually.

10) File Sharing.  I used the Debian Wiki entry at https://wiki.debian.org/SambaServerSimple

a) apt install samba smbclient

b) edit /etc/samba/smb.conf  – or put in the one from the old computer, assuming you had it working.

c) add your samba users:  smbpasswd -a USERNAME

replace USERNAME with the correct name, and it will ask you for the password

d) restart Samba:

    # /etc/init.d/samba restart
    or, if you are using systemd
    # /usr/sbin/service smbd restart

Migrating To A New Linux Computer With A Manifest

With Windows, you buy a new machine.  You copy a few things off the old one that you know are most important.  You make a token effort to re-create your old environment.  Then something Microsoft did gets in the way or you can’t find your original discs and you just keep it in the closet because you are afraid you will have lost all your data.  Because that’s what your buddy did down the block.

I’ve been told anyway.  I’ve also been told that most people have spare Windows computers that are taking up space.

Mac people can use a backup from Time Machine.  I’ve actually done that, and it is pretty slick.

I ran into a very different problem.  My backups were perfect clones of the original, but my original was “broke”.

Note:  This migration process is SO very easy that it takes about a half hour of actual hands on keyboard “work” and about 3 hours of processing time.  I have done this a couple times in a short span of time and am now getting “Creative” with the process.

Narrative:

Once upon a time, I installed Linux, and it was good…

Actually I installed Debian.  I figured that if there are so very many distributions of Linux that were forked from Debian, that Debian itself was safest.

I think I was right, no proof, just my opinion.  I have done my distribution hopping, and had a machine from 1995 that was still being used in 2010 with CentOS 4.  Still stable; I just had much better hardware by then than my old Panasonic Omnibook with a Pentium 3 chip in it.  Yes, a 15-year stretch with one computer is a long time, and I was the third owner of the machine.  It was my “pet”.

I ran Debian 7 alongside my Windows machines, and slowly found myself using Linux more than Windows.  I still use Windows today, Windows 8.1 specifically, and I have an XP virtual machine with The Embedded Patch so I can get Windows updates, but I don’t think I have run that in the past month.

The only thing I use Windows for now is Photoshop, and really there are Linux programs like Gimp and Inkscape that will do what I need.

My original install was in my Dell.  Seven years ago in 2010.  First generation i7.  Dell Precision M4500. Blasted thing was built like a tank.  It loved, and once again loves, Linux.  I lived there for a year or more.  Then I was given upgrades, a couple times.  The original install went from machine 1 through 4.  Along the way Debian got upgraded to Debian 8, then recently 9 although I joined 9 back when it was “Testing”.

You see, with Linux you can clone the hard drive, take the clone, plug it into a new machine, and it just may work.  All you need is a USB caddy for the destination drive, and as long as your drive names line up, it works:

# copies /dev/sda wholesale onto /dev/sdb; triple-check the device names first
dd if=/dev/sda of=/dev/sdb bs=4M conv=noerror,sync

For the most part it did, but there were weird video effects and strange hibernate and resume problems.  These cropped up as a result of taking machine 1’s operating system and making it work in machine 2 and 3 and later 4.

The actual process:

So here I am, creating from scratch Son Of Original Install. Debian 9 with XFCE4.  Oh, and a lot of extra “baggage” that I don’t need but it is easier that way.

I decided that I would create a list of programs that I could reinstall on the New Machine and see how it works.  Also this is done with me in XFCE4.  We Linux People are if nothing else, flexible.  If it is not where I said it is, poke around a bit.

Step 1.  The New Machine is a Thinkpad T530, and gets a clean “Bare Metal” install of Debian 9.  I ended up doing it a couple of times, and so far the only weirdness is that it insists that I do “sudo su” if I want my terminal session to be and remain root.  They also renamed the network devices whose names have been used for decades.  So when I get to the network tweaks that I will have to do, I may have to edit a configuration file.  Most likely smb.conf.

Success.  I’m typing this from that machine now.

Step 2.  Create Manifest and install it from Synaptic. (Menu, System, Synaptic)

Step 2a.  On Original, open Synaptic.  After giving it the password for Root create a manifest by clicking “File, Save Markings As” and ticking a box at the bottom of the window that says “Save full state, not only changes”.

Synaptic created the file with everything, and in the next step it will place everything where I need it.  Yes, it will add a lot of software I don’t really need, but with Synaptic and Linux I can purge all that stuff with a simple “apt purge” and it will remove it all, completely.  Put the file on a chip or USB stick and bring it to the new machine.

You can edit the file and delete anything out that you know won’t be needed, but you will have to trust Synaptic to realize what you’re trying to do.  Best if you did the removal in the next step.

Step 2b.  On the new install on the new machine, open Synaptic and select “File, Read Markings”.  Tell it where that file is.  It will read the manifest in, select all your “markings”, and queue them for Synaptic to install later.

I did that in bed.  It was 8GB worth of upgrades on a replacement for my old 7 year lived in install.

Here is where I am second guessing and should have removed the other programs and window managers that I don’t use.  I like XFCE4, it’s light, fast, and configurable.  Others prefer KDE or Gnome.  I have them all installed.  Why not, it’s a seven year old install.  If you remove it before telling Synaptic to update, Synaptic will get rid of the chaff along with it.  I didn’t want to, I wanted “What I Had On The Old Install”.

Step 3.  Bring over my home directory.  I had cloned the Original install onto a backup drive.  I took that drive, plugged it into an external case, plugged that into the USB port, and let it copy.

Step 4.  Live with it.

I have to go with this new install for a while spotting problems.  And I haven’t gone back to the old machine since.

Step 4a) The first one was I had to be able to play DVDs.

Change, as root, the /etc/apt/sources.list file by editing it and adding the following lines:

#2017-07-08 to add libdvdcss

deb http://download.videolan.org/pub/debian/stable/ /
deb-src http://download.videolan.org/pub/debian/stable/ /

Then, as root, refresh the package lists with “apt update” and install the library with “apt install libdvdcss2”.

VLC worked by playing Futurama in Spanish.  Leela is a babe.

Step 4b) Network shares on Windows are not yet accessible.

SAMBA was installed on Original, and it was happy.  It took a lot of twiddling to get it that way.  Luckily I could copy its config over and merge it into my bare-bones smb.conf file.  I saved the newly installed one as a backup, then copied the one over from Original, then restarted.

Fixed the access to my network shares.  It did not fix the share I had on the new machine.  I’ll work on that.

Step 5) Conclusion is that this process works.

Worked.  I’m on day 6 of all of this.

Two problems cropped up:

1) The network share on my “new” machine still hasn’t been fixed but I will deal with that later

2) Flash does not work.  Flash, as a platform, is dying. The only place it irks me is on www.imgur.com when I run across a short video to play.  I’ll look into that at my leisure.

Step 6) Epilogue:

Furthermore…  I got bored and did it again with another machine.  I had it working once the updates happened.  It’s a first generation Thinkpad Yoga S1 and has its own problem.

That’s the thing, there will always be quirks.  Be prepared.  They happen because the new computer has different hardware than the original one.  You may need drivers, and you may need to remove software.

After all you still have your old machine and its backup, so you can go back if you want.  This “migration” is completely safe to your original data.

Docker on Debian Linux – Getting a Canned Container for WordPress

The Setting for this article:


If there is no outside nonsense going on, this is the way that the whole open-source and Docker universe is supposed to work.  You grab a container that has your operating system in it from a repository and run it as needed.

That means you grab it from the cloud.  When you have it you can do with it what you want and then either save it or throw it away.

All that is great, but it does go against my own Project Management training.  You did not make it, so how do you know it does not have any problems, like viruses or worse?

So Warning:  Only run a container that you create or one that you know to be safe. 

I am assuming that the container I am working with is safe because it is listed as official.  To be honest, I can’t say I know enough about Docker and this particular container to say that assumption is true.

The benefits of running a container from a repository:

In the case of this particular one, it saves me a lot of time.

I do not have to create the container, I can just use it.

I can save it on my computer, or not – it is up to me.

I can modify it as I like.

Not a long list; I’m sure you can add other items to it.  It took me about 10 minutes to grab the container and save it to my hard drive.  It takes about 2 hours to install Debian, then another couple of hours to install LAMP, and more time to install WordPress… plus configuration time.  Someone at Docker did it for me.  This is why containers got popular.  In a large organization, you will have a standard container that gets cloned dozens of times for the designed purpose.

Since we’re going to simply use it, here are the steps to get a “canned” container onto your computer for WordPress.

 

1) Get your environment ready

Start your Docker compatible computer and make sure Docker is up and running.  A simple command like docker images will tell you the list of images you have available and what they are called.

2) Search the repository for the image you were given

What you need is a list of containers.  These are “out there on the cloud” and available at Docker for you to grab.  They may also be on your own cloud server if you’re at a business.  Typically someone will tell you “what to use” to get your job done.

We need a WordPress image.  You want to search.  docker search wordpress  will give you a list of all images that have wordpress in the name field.  Remember that “case counts” in Linux – all things are Case Sensitive as a Standard.

The one I feel safest choosing is the first one: “wordpress”.  It has an official tag, 1601 stars, and I honestly am simply guessing.  Like I said in my preamble, if you want bulletproof security – create one from scratch.

3) Download the image with the “docker pull” command

This is pleasantly easy.  The image itself is called “wordpress”, and all you need to get the latest version is to enter “docker pull wordpress” on the command line.  Docker will go out to its repository and download the image locally.

At this point, you have in your hot little computer’s hands a copy of Debian Linux with WordPress.  If all you want to do is poke around and destroy at the end, you can stop reading, you’re done.

4) Verify that you have the image available to Docker

You can easily do this with another “docker images” command:

Having the wordpress image show on the REPOSITORY list proves things worked.
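Roughly, the check looks like this:

docker images
# look for a row whose REPOSITORY column reads "wordpress";
# note its IMAGE ID, because that is what you will run in the next step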

5) Prove that the image runs in Docker

You have it. Now how do you run the beast? 

First, check that images list.  There is a field called “IMAGE ID” in the list.  That is what Docker knows the images are called.  The Name is just a friendly name that you can change if you have a mind to.
Second, run the container using that IMAGE ID.  You will also want to be able to do something inside, so run /bin/bash as well.  That will give you a shell to control the container from.

For my copy of wordpress the number to run is f6ae044a5122.  The command to run is “docker run -i -t f6ae044a5122 /bin/bash”.  Your IMAGE ID will vary based on your list.

Notice that the prompt changes from root@elk to root@f6ae044a5122.  This tells you your computer changed and you are now “inside” the container.  You can enter normal bash commands here.

I am purposely getting out of the container with an “exit” command.  When you exit the container, it stops.

Finally, I need to know how Docker refers to the container I just ran.  “docker ps -a” shows it; the value in the “CONTAINER ID” column (the friendly name lives in the “NAMES” column) is the key.  For me it is 8bb814e82c48.  I will need this to re-start the container in the next step.
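Pulled together, step 5 is just this (a sketch; your IDs will differ):

docker run -i -t f6ae044a5122 /bin/bash   # run a shell in a container made from the image
exit                                      # leaving the shell stops the container
docker ps -a                              # shows the stopped container, its CONTAINER ID, and its name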

6) Starting your Docker container, getting the container up to date.

The last step was to get out of the container.  That puts you back onto “your” computer.  You’re “local” now.  That is important because you will want to go back into docker to make sure you can.  It is an extra step, but it allows you to be careful.  Following that you will want to update the container.

You first need to start the container.  That is done with the docker start command.  My container is called 8bb814e82c48, and that is used here.  I do this by starting it with “docker start 8bb814e82c48“.

It’s running, but you have to attach to the container.  In this case, it drops me into a /bin/bash shell automatically.  I do this with a “docker attach 8bb814e82c48” command.

Now that I am in the container, I want to update the container to current Debian – get all the software inside the container up to date.  This is done in the traditional way with the following three commands:

  1. apt update
  2. apt upgrade
  3. apt dist-upgrade

Finally it is necessary to get out of the container by entering an “exit” command at the command prompt.

All except the exit are in the next graphic.
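In text form, the whole of step 6 is roughly this (a sketch; your container ID will differ):

docker start 8bb814e82c48    # start the stopped container
docker attach 8bb814e82c48   # attach; it drops you into the container's /bin/bash
apt update                   # inside the container: refresh the package lists
apt upgrade                  # upgrade the installed packages
apt dist-upgrade             # pick up anything with changed dependencies
exit                         # leave the container, which stops it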

7) Commit your docker container to your local hard drive and give it a friendly name

That’s all great.  You have gotten the container up to date.  You need to be able to shut down the computer and make the container available for you when you come back from what ever you are doing out in the real world.  Right?

This is done with a few steps.

First you need to commit the container and give it a name.  Then you can verify your actions with an image.

Your container is the one you have been working with.  In my case it is 8bb814e82c48.  You need to commit this to the hard drive within Docker on the local machine.  I enter the command “docker commit 8bb814e82c48 wordpress“.  This gives the container the NAME of “wordpress”.  Terribly generic.  If you are running a couple containers at once, you will want to give it something more specific and meaningful.

Verify that Docker knows that it exists in its own table by entering a “docker ps -a” command.

Finally you can do a “docker images” command to show the list of containers you have access to.
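As a sketch, step 7 boils down to:

docker commit 8bb814e82c48 wordpress   # save the container's current state as an image named "wordpress"
docker ps -a                           # the container itself is still listed here
docker images                          # the committed "wordpress" image shows up in the image list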

8) Running your local copy of the Docker Container

Now that you have returned to your computer, or have stopped Docker, you are going to want to go through the motions of restarting it again.

First, the “docker images” command will give you the information you need.  It will tell you Docker is up and running and list the images you have to work with.  Look for the “wordpress” image committed to disk earlier; its IMAGE ID (which will differ from the old container ID) is what you will run.

Second, you can start a fresh container from that image and verify that it is responding.  Enter “docker run -i -t wordpress /bin/bash”, or use the IMAGE ID from the list in place of the name.  This will also put you inside the container and allow you to enter bash commands.

Finally, you can exit out of the container and go back to your local machine.

This is all listed in the following graphic.
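Without the pictures, it is roughly:

docker images                          # confirm the committed "wordpress" image is in the list
docker run -i -t wordpress /bin/bash   # start a fresh container from that image
exit                                   # back on the local machine; the container stops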

9) How to actually access this specific container from the Docker Repository

Here is where I end for now.

What you have achieved is to grab a container from Docker and get the thing up to date.  You were able to save it locally, hopefully.  Finally you proved that it is verifiable and repeatable by running it again.

That gives you a server that you don’t know how to use.
I’m in the same boat at this point.  There is a long list of things you can do with the container, if you know how to get into it.  This specific container is a Docker produced container.  They have documented the steps for you to get access to it.

I will be returning to this and producing a cheat sheet in time as I get more used to the whole process.  I’m used to Debian and LAMP and doing it all “live” on a “real” computer (bare metal for the VMWare crowd), but this is still a learning process for me.

So once I get more helpful information, I’ll be back.  After all I have been at blogging since 2007.

The link from docker is here:

https://hub.docker.com/_/wordpress/

Good luck!

Docker on Debian Linux – Why and The Install of it All

If you want just the instructions, Skip to the break.  This is here basically so that I can do this again later.

The setting:

For almost all of what I do, I run Linux.  Specifically Debian Linux.

It runs much faster, has most of the same programs you’re used to on a Mac or a Windows PC, and is about as stable as an operating system can be. It can run some Windows programs in Emulation (WINE) but that’s not the point. I’ve got what I need if I stay within Linux, natively.

Some Debian Linux computers have “uptime”, time since they were last restarted, in years – not days or weeks.

I update things when I want.  I make things how I want them.  I change things how I want.  If there is one thing Linux offers that Mac and Windows users don’t get, it is customizing things exactly the way that you want.

The backstory:

My blog resides in two places.  On www.ramblingmoose.com and on a WordPress hosted site at ramblingmoose.wordpress.com as a backup.

I really don’t care for how my WordPress site looks, so I want to change it.  Being someone with more years in IT Software Development Project Management than I care to admit to, I will do it “offline” and not on the live site.

Furthermore, I have a client in Los Angeles.  His website was developed on WordPress by me, and I have a backup.  I’m finished with the site, but I thought it might be “fun” to see if I could get it to work here on my own computer.

The reasoning:

I am happy with my own main computer running Debian.  Actually, that is an understatement.  I don’t want to slow it down by loading up server software, a LAMP stack, and other things that would bog it down.  I could create a VMWare or VirtualBox virtual computer and put the LAMP stack there, and I’ve done that a couple of times before, but running a full VM for something like this felt “overkill” and “heavyweight”.

What Docker Does:

Docker will allow me to share some of my computer by running a pared-down version of Linux inside what they call a Container.  It is not a full virtual computer, so it should run faster, and since it is not a full computer it will not affect my apparent speed – in case I forget to “turn the damn thing off at night”.

The Goal:

Get a Docker Container up and running.  The container will have a web server and WordPress software running configured for my use.


 

Installing Docker:

This is adapted from the official Docker instructions found at this link.

This will get the base Docker software installed on a Debian system.  Your system should be “up to date” and running fairly current software.  As I am writing this in March 2017, Docker will run on Debian 7, 8, and 9: 9 being “Stretch” or “Testing” at this point, 8 being “Jessie”, and 7 being “Wheezy”.

My own personal thought is that if you aren’t running at least “Jessie”, get yourself upgraded to current software.  After all, within a month or three of this writing, Stretch will become “Stable” and the official up to date current software.  It’s easy.  I started with Wheezy, migrated to Jessie, and am currently running Stretch.

Open a terminal session and sign in as root with “su”.

Add transport module to allow Docker to grab what it needs via HTTPS:
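Something along these lines should cover it (this mirrors Docker’s own instructions from the time; software-properties-common is what provides the add-apt-repository command used further down):

apt install apt-transport-https ca-certificates curl gnupg2 software-properties-common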

Add the Docker GPG key:

curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add -
Verify that the key ID is 9DC8 5822 9FC7 DD38 854A E2D8 8D81 803C 0EBF CD88.

apt-key fingerprint 0EBFCD88

pub   4096R/0EBFCD88 2017-02-22
Key fingerprint = 9DC8 5822 9FC7 DD38 854A  E2D8 8D81 803C 0EBF CD88
uid                  Docker Release (CE deb) <docker@docker.com>
sub   4096R/F273FCD8 2017-02-22

Add the Docker repository:

add-apt-repository \
   "deb [arch=amd64] https://download.docker.com/linux/debian \
   $(lsb_release -cs) \
   stable"

Update your apt repository lists:

apt update

 Install Docker:

apt install docker-ce

 

At this point, Docker is installed.  They set up a container called Hello World that you can run to verify it.

docker run hello-world

Docker is installed.  The software will run from the command line, as root.  It can be configured to start automatically by entering in a command:  

systemctl enable docker

However, I have not yet done it because I am not convinced I want to run this every time I boot.  I am being conservative with my system resources, but to be honest I have not noticed it slowing me down in the slightest.  Since I will only be running Docker intermittently, I probably will not enable it at boot.

Since this tends to be my own mental scratch pad, the way to disable docker at boot is:

systemctl disable docker

These and more tweaks to how it runs are found in Docker’s own post-install help file.

Conclusion:

Obviously, this is something that is incomplete.  I will be returning with more when I go to get my container started.  I need a Container with Debian, a LAMP Stack, and WordPress.  A ready made version of this exists, and I will try that first – ready made from Docker itself!

On the other hand, my own normal IT Project Management curiosity tells me that I need to make one on my own.  So I’ll work on that later.

First step, getting it installed, worked.  Next I will get onto that other stuff… later.

Cloning a Hard Drive With Linux

Yeah well calling it Linux means I most likely lost 97% of the market.


Windows people don’t realize that there is a painless way to get their Windows computer to do some of this stuff – a Live Linux Distribution like Ubuntu.  If you get a live disc working, you can copy this shell script onto it, then follow the instructions.  It should work.

Mac people may even be able to run this natively.

Maybe.  It depends on whether pv is Mac friendly; if not, convert the pv line to a copy command of your choice.


A Live Linux can be “burned” to a USB stick or to a DVD and your computer can be booted from that.


And now you know!


But none the less…

What this is, basically, is my own shell script.  I use it to completely back up my computer.  All the drive specifications are known in advance and do not change.

I run fdisk -l as root and use the information from it to edit the shell script, changing things as needed.

This assumes that you know what your drive devices are, are willing to edit a shell script to make your own changes as is, then have an external USB hard drive slightly larger than your boot device.  My boot device is /dev/sda and most likely yours is as well.

This assumes that you have a second drive sitting in your chip reader.  If not, you can comment out the line that copies it to the hard drive.

This assumes that you have room enough to do everything.

I am doing this on Debian Linux, however the commands here are so very generic that you should be able to run this on most “full” distributions of Linux.  Debian, Ubuntu, Linux Mint, Centos, Fedora and the like come to mind.

Standard Internet Warranty – I make no warranties and it is at your own risk.  If you lose data, it is on you.  I take zero responsibilities for any miscoding or changing or whether a magic dragon comes out of the skies and takes you onward to valhalla.  Really.  None at all.

I will say that I ran this exact shell this morning and it worked for me.  You WILL have to change the file specifications to fit.   

Finally:

  • My boot drive is a 240gb SSD with about 120gb free.
  • My chip has about 12 gb worth of data on it.
  • Debian thinks that the chip is called “128GB” and it typically comes up in the file manager (thunar) on /media/bill/128 GB/

Prerequisites:

Installed versions of “dialog” and “pv” (both of which the script below uses).
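If either is missing, install them as root:

apt install dialog pv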

How it runs:

  • This must be run as Root in Terminal.
  • This will pause after each step with an OK message in the Dialog box.
  • For me, the entire shell runs in about 2 hours on my i7 laptop with a USB 2.0 external hard drive.

First the shell in its entirety through to the end comment:

#! /bin/bash

#backup.sh from http://www.ramblingmoose.com

dialog --no-lines --title 'Run This As Root' --msgbox 'This shell will backup SDA to SDB\nYou must click OK after each step so watch this.\nYour Disaster Recovery will thank you!' 10 70

dialog --no-lines --sleep 3 --title "update your sources" --prgbox "apt-get -y update" 10 70
dialog --no-lines --sleep 3 --title "update your software" --prgbox "apt-get -y upgrade" 10 70
dialog --no-lines --sleep 3 --title "update your distribution" --prgbox "apt-get -y dist-upgrade" 10 70

arg1="'/media/bill/128 GB'"

dialog --title "copying the chip to the drive" --prgbox "cp -avr $arg1 /home/bill/128GB" 10 70

(pv -n -i 2 /dev/sda > /dev/sdb) 2>&1 | dialog --title "Backup SDA to SDB" --gauge 'Progress...' 7 70

dialog --title 'Message' --msgbox 'Cloning is done, click ok to clean up and end' 5 70

dialog --no-lines --sleep 3 --title "Removing the copy of the chip" --prgbox "rm -r /home/bill/128GB" 10 70

dialog --no-lines --sleep 3 --title "Synchronize your drives" --prgbox "sync" 10 70
#end backup.sh

To actually use that mess…

  • Copy the entire text and paste it into your favorite text editor.
  • Save the file with a “.sh” extension somewhere you will be able to get to it – in your path.
  • Change the mode to executable – chmod 0770 backup.sh
  • Change the owner to root.  You never want to use this as a regular user – chown root backup.sh
  • Change the group to root.  chgrp root backup.sh
  • Run the shell as root: sudo ./backup.sh

Now, each line in excruciating detail!

—- Run the programs using bash interpreter

#! /bin/bash

—- I’m signing my work here

#backup.sh from http://www.ramblingmoose.com

—- This puts up a message box

dialog --no-lines --title 'Run This As Root' --msgbox 'This shell will backup SDA to SDB\nYou must click OK after each step so watch this.\nYour Disaster Recovery will thank you!' 10 70

—- The next three steps get your distribution up to date.  If you don’t want this, comment them out

dialog --no-lines --sleep 3 --title "update your sources" --prgbox "apt-get -y update" 10 70
dialog --no-lines --sleep 3 --title "update your software" --prgbox "apt-get -y upgrade" 10 70
dialog --no-lines --sleep 3 --title "update your distribution" --prgbox "apt-get -y dist-upgrade" 10 70

—- Store the directory that Linux mounts the chip to in “arg1”.  If there is no chip to back up, you can comment this out.

arg1="'/media/bill/128 GB'"

—- Wrap the actual work of copying the chip out to a dialog box.  The flags “-avr” say copy the whole drive in $arg1 recursively to the destination.  If no chip to copy, comment this line.

dialog --title "copying the chip to the drive" --prgbox "cp -avr $arg1 /home/bill/128GB" 10 70

—- This line does the real work.  Now that you copied your chip out to the hard drive, clone the actual hard drive.  The flags on pv tell it to report a numeric percentage of the work done (on standard error, which is why it is redirected with 2>&1) so that dialog can show a pretty gauge.  Ahh, so pretty!

(pv -n -i 2 /dev/sda > /dev/sdb) 2>&1 | dialog --title "Backup SDA to SDB" --gauge 'Progress...' 7 70

—- Copy is done, it is time to clean up message

dialog --title 'Message' --msgbox 'Cloning is done, click ok to clean up and end' 5 70

—- Remove the data that you copied from the chip off the hard drive, to be neat.  If no chip, comment this out.

dialog --no-lines --sleep 3 --title "Removing the copy of the chip" --prgbox "rm -r /home/bill/128GB" 10 70

—- Your work is done, make sure you flush your cache by doing a “sync”.

dialog --no-lines --sleep 3 --title "Synchronize your drives" --prgbox "sync" 10 70

#end backup.sh

Netbook Server – Sharing An External Hard Drive In Linux

So if you have followed my instructions, you now have a:

Computer that runs Debian Linux
http://www.ramblingmoose.com/2016/02/the-netbook-server-installing-debian-or.html
Computer that you can look into using Remote Desktop
http://www.ramblingmoose.com/2016/02/the-netbook-server-you-need-to-be-able.html
Computer that you can share part of the local hard drive
http://www.ramblingmoose.com/2016/02/the-netbook-server-how-to-actually.html
Congratulations.  You now have a file server!

If you followed those directions, it also installed a bunch of other programs that will let you do other things.  I noticed that something called “CUPS” was installed, which will let you plug a printer into the same machine and have it act as a “Print Server” or a “Network Printer” – if you can find the instructions on how to configure it.

Debian and Raspbian both come with enough that you could use that machine as your one and only daily driver computer.  The browser is called “Iceweasel” and is Firefox, rebranded.  You have Libre Office to write letters, work with spreadsheets, and make presentations that are all compatible with Microsoft Office.

Yes, it really is, I use it every day.  No, you don’t have to pay for it.  Ever.

There are more apps, and I would suggest looking into some of the software that is out there, all free.  If you start “synaptic” from your terminal as root or “sudo synaptic &” you will find so much free software that your mind will fog up and get tired before you find everything you want.

But that all is just the preamble to this discussion.  You came here to share an external drive.  This is like any other shared drive on the network, you have to have it plugged into the server (USB Port on your netbook), you have to tell the computer where it is, and you have to tell it how it is to be shared.

Remember, I am trying to write this for a Windows audience so I’ll go as basic as I can.  You Windows folks are in a new world, and you will want to have this go well.  If you are a Linux expert or even intermediate, you may find this needlessly wordy.   Not to worry, you’ll be right.

One Step At A Time.  Divide and Conquer.

First step – Make sure you can read the drive from Linux.


Before you get anywhere, start the computer.  Log in.  Get to your desktop.  Then plug in the drive.

Start your terminal session by clicking on the (start) “Applications Menu”, then click on Terminal.  Sign in as root by entering “su” and your root password.  You will eventually need this.

Now, launch the file manager by clicking on the (start) “Applications Menu”, then click on “File Manager”.

In the left pane of the file manager you will see Devices, Places and Network.  In “Places” your external drive will come up with a little eject arrow to the right of it.  Click on the icon for the drive.  A little wait icon will start to rotate.  When it is through it will do one of two things:

Success is if you are dropped into a view of whatever files are on the disc.  It means that all the drivers are in place.  Most likely this drive is something called “vfat” or “fat32”.  Remember this for later.

Failure is if you get a big ugly warning message up.  That means that you don’t have the drivers for the format that the drive has on it.  Most likely you will have to install the set of drivers called “ntfs-3g”.  This would be where your external is a really big drive and you did it to make things faster.  To install that do the following steps:

  1. apt-get update
  2. apt-get upgrade
  3. apt-get install ntfs-3g
  4. shut down the server
  5. unplug the drive (It isn’t shared yet and you don’t want to wait for the computer to release it)
  6. start the server
  7. and plug in your drive when you have logged back in to the desktop, terminal, and file manager.

No matter what, at this point, you should be able to read your external drive.

You also need information.  When you worked with the server software “samba” you created a user and a password, and you will need that later.
 

Next step – finding where Linux thinks that drive actually is.

Here is where Linux people will be saying “gparted“.  If you know how, go for it; what follows is the slower but less risky method.

To determine what is plugged into your machine type into the terminal:

  • dmesg | tail -30

Linux keeps a log of whatever is important to the system.  Since you “just” plugged that external drive into the computer, the last thing on that very long stream of text will be what was reported when the computer detected the hardware.  The “tail” bit will tell terminal to just show the last 30 lines of what are in the display of messages (dmesg).

The clues there are the lines that say “usb 1-2” and “sdb”.  When I plugged in the drive, it said “new high-speed USB device number 2”.  So what we’re going to tell the system is that the drive is sitting on a device called sdb.  The partition we will be using will be the first one, so it is officially “/dev/sdb1”.  In Windows, it would come up as your D drive if there is no DVD/CD drive present, or your E drive if there is; this is the same idea.

Since my stick is formatted to be removable on Windows, it is a format that Linux calls “vfat”.  My big 4 TB drive is formatted NTFS, so I would have to mount it as “ntfs-3g”.

Create a place to store the data in.  In my case, it is “/home/bill/external“.  You should change “bill” to the name of the user that you logged in as when you started this exercise.  To make the directory, open terminal again as a regular user and enter these commands:

  • mkdir /home/bill/external
  • chmod 0770 /home/bill/external

You just created the directory and set it up so that you and root can use it.

There is one file that you need to edit in Terminal with the following command:

  • nano /etc/fstab

This file tells linux where all of your disc drives sit, so be careful and don’t delete anything.  You will be adding a line, as below:

  • /dev/sdb1 /home/bill/external vfat defaults 0 0

That says – put the external drive’s first partition “in” the /home/bill/external directory.  It also says that it is “vfat” format so change that if it is an ntfs-3g format.  The defaults are lengthy and you can go into them in great detail on the Wikipedia Article.

If you wanted to go further and add multiple partitions for other people, you could do it in /etc/fstab by adding multiple entries.
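Still as root, you can sanity-check the new entry without rebooting:

mount -a                       # mount everything in /etc/fstab that is not already mounted
df -h /home/bill/external      # confirm the partition landed where you expect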

Once you restart the computer, you should be able to find the drive on Windows, and you are on your way.  Just find the drive in Windows File Manager, enter in your login from Linux, and you’re good to go.

One final wrinkle

What this does is to “bind” the external hard drive or memory stick to the server.  It is now set to automatically mount and share the drive whenever the power comes back on.  If you do not have a drive plugged in, Linux will boot, but put you into a terminal session as root into what is called “Single User Mode”.  You can do the following edit at that point with the commands below.

To remove the hard drive so that the server is no longer looking for the drive at boot, in terminal as root:

  • nano /etc/fstab
  • find the line with the external drive and enter a # as the first character in the line
  • save the file and restart the computer

This now turns your server into a machine that only serves the local hard drive.