Am I the only one who thinks it's funny that all these virus scanning/security companies offer VPN services for home users proclaiming to make your network "more secure"?
Let's think about this for a second. Say you have a firewall in your home network, and you VPN to some 3rd party outside your network. Your IP is now exposed OUTSIDE YOUR NETWORK with NO FIREWALL to protect you. So if you unknowingly had, say, sshd running on your box (on a Mac, that's as simple as clicking the "Remote Login" checkbox in Sharing), and you had set a weak password for your root account (or did the OS upgrade that reset the Mac root password TO NOTHING), your box would be exposed to the open internet, with no firewall to block port 22 and no password on the root account.
In other words, using a VPN exposes you to an easy hack.
For this reason I always run "netstat -an | egrep 'tcp.*LISTEN'" on my Mac to see what ports are listening before using a VPN, because all those ports are going to be open to the world for hackers.
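That check looks like this (macOS/BSD netstat syntax; on Linux you'd typically use ss instead):

```shell
# See what's listening on your box before you connect to a VPN.
# Anything in LISTEN state here may become reachable from the
# VPN provider's network once you're connected.
netstat -an | egrep 'tcp.*LISTEN'

# Rough Linux equivalent:
# ss -tln
```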
Something to think about. Don't assume VPNs make you more secure. It's actually the opposite.
Instructions: Separate two standard decks of cards into suits (clubs, hearts, diamonds, and spades). Now shuffle all the hearts and spades together into one deck, and all the diamonds and clubs into the other. Put 2 Jokers into each deck.
Use the following chart to determine what each card is, and use the standard rules of Magic.
So I did something pretty cool Monday night (last night).
I have this database that's, let's say, over a TB in size, and it normally takes 4-5 hours to copy from one server to another using rsync.
So I came up with a way to run rsyncs in parallel and maximize the throughput, and it copied the whole thing in 35 minutes. What used to take 4-5 hours now finishes in 35 minutes with my new technique.
Step 1) The first command is for everything outside of the "big database" that you want to sync. In this case I have only 5 rsyncs running in parallel. This should finish pretty quickly.
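A sketch of the parallel-copy idea as a shell function, splitting the work by top-level subdirectory (the function name, paths, and host below are my own illustration, not the author's actual commands):

```shell
# parallel_rsync SRC DEST [NJOBS]
# Copies each top-level entry of SRC with its own rsync process,
# running up to NJOBS of them at once (defaulting to 5, matching the post).
parallel_rsync() {
    src=$1; dest=$2; njobs=${3:-5}
    find "$src" -mindepth 1 -maxdepth 1 |
        xargs -n1 -P "$njobs" -I{} rsync -a {} "$dest/"
}

# Hypothetical usage:
# parallel_rsync /data/mydb remotehost:/data/mydb 5
```

One rsync per subdirectory only helps if the data is spread across several directories; a single giant directory would still be one serial stream.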
When you run zgrep you are actually running gzip with the decompression option, piped to grep.
So to replace zgrep with pigz (and make it much, much faster), just pigz -dc the file and pipe it to grep, which is basically what zgrep is doing with gzip. One caveat: pigz can't fully parallelize decompression the way it parallelizes compression, but it does use separate threads for reading, writing, and checksumming, which still makes it noticeably faster than plain gzip.
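As a sketch, here's a zgrep-style helper built on pigz (the function name is mine, and it assumes pigz is installed):

```shell
# fastzgrep PATTERN FILE...
# The same pipeline zgrep runs, but decompressing with pigz
# instead of gzip.
fastzgrep() {
    pattern=$1; shift
    pigz -dc "$@" | grep "$pattern"
}

# Hypothetical usage:
# fastzgrep "ERROR" /var/log/app.log.gz
```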
I like this sed trick. I need to use it more. It uses a date-stamp string to match from the first occurrence of that string through the end of the file, so you are only grepping the log from that point on.
The date string is set to now minus 10 minutes, with the last character removed. So if it's currently "2016-11-16 00:18", it matches from the first line containing "2016-11-16 00:0" down to the end of file.
That way the window it looks at is always roughly the last 10-20 minutes of logs.
ERROR_STRING="com.amazonaws.AmazonClientException: Unable to execute HTTP request: Timeout waiting for connection from pool"
DATE_REGEX=$(date -d "10 min ago" "+%Y-%m-%d %H:%M" | sed 's/.$//') # Example: 2016-11-16 00:0
COUNT=$(sed -n "/$DATE_REGEX/,\$p" ~tomcat/logs/mytomcat.log | grep "$ERROR_STRING" | wc -l )
if [ "$COUNT" -gt "0" ]; then