Recently I moved a website that used a ton of legacy PHP code from the client's production server to a development location. After the move was complete I found that the previous developers had been extremely sloppy: rather than keeping the DB credentials in a single location/file, they had them in four different places.
After I figured out where all the DB credentials lived, I started getting open_basedir errors. The original developer had hard-coded the web root location hundreds of times across hundreds of files. For a moment I felt a bit overwhelmed, then I remembered that I have the terminal to solve problems like this.
After a bit of research I came up with the following command to recursively search through the entire codebase. When an instance of the old web root is detected, it is replaced with the correct one.
grep -rl '[search for string]' . | xargs sed -i 's@[search for string]@[replace with string]@g'
Or another example with actual search/replace strings:
grep -rl '/var/www/vhosts/example.com/httpdocs' . | xargs sed -i 's@/var/www/vhosts/example.com/httpdocs@/var/www/vhosts/newdomain.com/subdomain/dev@g'
That command breaks down in the following manner.
“grep -rl” searches recursively for the specified string, “/var/www/vhosts/example.com/httpdocs”, starting in the current directory “.”; the “-r” option makes the search recursive, and “-l” tells grep to return only the names of the files that contain the string.
Those results are then piped “|” into “sed”, where the “-i” option makes the changes in place. Then comes the find-and-replace expression, in this case “s@[search for string]@[replace with string]@g”. The “@” signs could be almost any other character; typically a “/” is used, but in this case the strings to find and replace both contained “/”, so it wouldn't work as the delimiter. Replacing the “/” with “@” keeps sed on track. It could just as easily have been a “#” or “$”; use whatever character doesn't appear in your strings.
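As an aside, the one-liner above can trip over filenames that contain spaces, since xargs splits its input on whitespace by default. A sketch of a hardened variant (using GNU grep's -Z and xargs' -0 options, with the same example paths):

```shell
# Same replace as above, but NUL-separated so filenames containing
# spaces or other odd characters survive the trip through the pipe.
# -Z makes grep terminate each filename with a NUL byte, and -0 makes
# xargs split on NULs. Quoting the sed expression also protects the
# @-delimited pattern from the shell.
grep -rlZ '/var/www/vhosts/example.com/httpdocs' . \
  | xargs -0 sed -i 's@/var/www/vhosts/example.com/httpdocs@/var/www/vhosts/newdomain.com/subdomain/dev@g'
```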
And with that I was home free: no more open_basedir issues. Thanks to the command-line recursive find and replace, none of those entries had to be changed by hand.
Thanks Linux Shell
I was left with a conundrum: I needed to add a default index.html file to each subdirectory on my web server to ensure that the appropriate response was given if someone browsed to any directory on the server. The command I came up with to copy the file into all subdirectories was:
ls -d */ | xargs -n 1 cp -i index.html
Where index.html is the file to be copied. The ls -d */ command gets a list of the directories in the current directory, which is piped into xargs to execute the copy command, but it wasn't as robust as I had hoped: it doesn't work with directories that have spaces in their names, and it only copies the file into the immediate subdirectories. So I started playing around with the command and came up with the following modified version, which copies a file into all subdirectories recursively.
ls -R | grep ":" | sed "s/^.://" | sed "s/://" | xargs -n 1 cp -n index.html
Again this command copies the index.html file. ls -R lists all files and directories recursively from the current folder. grep ":" picks out the directories, since each one ends with a colon ":". sed "s/^.://" removes the entry for the current directory ".", and sed "s/://" then cleans the trailing colon ":" off each directory entry. The cleaned directory names are piped into the xargs command, which copies the file into each one.
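For what it's worth, a find-based sketch can do the same job in one step, and it copes with directory names containing spaces because no ls output parsing is involved:

```shell
# Copy index.html into every subdirectory, at any depth.
# -type d matches only directories, -mindepth 1 skips "." itself,
# and -exec hands each directory path to cp safely, spaces and all.
# -n (no-clobber) leaves any existing index.html untouched, as above.
find . -mindepth 1 -type d -exec cp -n index.html {} \;
```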
So give it a go; it could save you a bit of time and hassle now that you can copy a file into all subdirectories.
I had made a partition backup of a machine using Clonezilla and wanted to restore it. The restore was successful, but because I had only restored the partitions rather than the full disk, GRUB was not installed in the MBR. Without GRUB in the MBR the system failed to boot.
I mounted the new filesystem to /mnt while still using the live Clonezilla disk that I had used for the backup. Then I chrooted into it using the following command:
chroot /mnt
Then, while in the chroot, I attempted to reinstall GRUB; since this was a CloudLinux/CentOS install I performed:
grub-install /dev/sda
But grub-install complained that it couldn't find /dev/sda, or that /dev/sda was not a valid block device. After hunting around on the internet for a little bit I came across an article that showed how to ensure that your current live filesystems are accessible inside of your chroot.
So I ran the following commands outside of the chroot before entering it again.
mount --bind /proc /mnt/proc
mount --bind /dev /mnt/dev
mount --bind /sys /mnt/sys
Then I chrooted to /mnt again and ran my grub-install command and all was well. The machine booted perfectly after that.