Agnus Dei's Journal


Tuesday, April 30th, 2019
10:47 am - How to handle Timezones/Timechange (EDT/EST) crons on Ubuntu
Ubuntu does not honor the CRON_TZ variable the way that RedHat-based Linux does.

Therefore you have to check whether America/New_York is currently on EDT or EST, and only run the command when the abbreviation matches. That means putting the cron line in twice: once for EDT and once for EST.

But this way you can guarantee the cron always runs at the same local time, regardless of whether EST or EDT is in effect.

/etc/cron.d# more test-cron
# Ubuntu does not recognize CRON_TZ=America/New_York
# Runs at 6:01 am Eastern, year-round
1 10 * * Mon,Tue,Wed,Thu,Fri root test `TZ=America/New_York date +"\%Z"` = "EDT" && /usr/local/bin/mycommand
1 11 * * Mon,Tue,Wed,Thu,Fri root test `TZ=America/New_York date +"\%Z"` = "EST" && /usr/local/bin/mycommand
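The timezone check those cron lines rely on can be exercised by hand. A quick sketch using GNU date's -d flag with fixed dates (note: the \% in the crontab lines is cron escaping; at a shell prompt a plain % works):

```shell
# %Z prints the timezone abbreviation in effect at the given date,
# so January is EST and July is EDT for America/New_York.
TZ=America/New_York date -d "2019-01-15 12:00" +%Z   # EST
TZ=America/New_York date -d "2019-07-15 12:00" +%Z   # EDT

# The same test the EDT cron line uses, pinned to a summer date:
test "$(TZ=America/New_York date -d '2019-07-15 12:00' +%Z)" = "EDT" \
  && echo "the EDT line would fire"
```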


Saturday, March 23rd, 2019
4:36 am - How to get your Vault Progress on a Mac or Linux box running MTG Arena under Wineskins:
I run MTG:Arena on my mac under Wineskins. It's flawless.

Anyhow, since MacOSX is really just BSD with a pretty GUI, I decided to make a simple command to get my Vault progress status.

So here it is:

$ find /Applications/MTGArena.app/ -name output_log* -exec grep vaultProgress\" {} \;  | tail -1
  "vaultProgress": 14.5,

That's it.

Here's another way if you want to parse it with jq, since the value comes back as JSON (albeit JSON embedded in a plain-text log):

$ find /Applications/MTGArena.app/ -name output_log* -exec grep -A28 "<== PlayerInventory.GetPlayerInventory" {} \; | tail -28 | jq .vaultProgress


Thursday, January 31st, 2019
1:29 pm - How to disable all the mess on logout of Apple Terminal


The last line of /etc/bashrc on the mac reads:

[ -r "/etc/bashrc_$TERM_PROGRAM" ] && . "/etc/bashrc_$TERM_PROGRAM"

So to disable all the craziness in the Apple Terminal on exit:

sudo mv /etc/bashrc_Apple_Terminal /etc/bashrc_Apple_Terminal-disabled
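Renaming the file is enough because of the "source it only if it's readable" idiom in that last line of /etc/bashrc. A throwaway demo (file path and variable name are made up for illustration):

```shell
# [ -r FILE ] && . FILE sources FILE only if it exists and is readable.
echo 'BASHRC_DEMO=loaded' > /tmp/bashrc_demo
[ -r /tmp/bashrc_demo ] && . /tmp/bashrc_demo
echo "$BASHRC_DEMO"    # prints "loaded"

# Once the file is moved aside, the same line is a silent no-op:
mv /tmp/bashrc_demo /tmp/bashrc_demo-disabled
[ -r /tmp/bashrc_demo ] && . /tmp/bashrc_demo || echo "nothing to source"
```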


Saturday, January 19th, 2019
9:58 am - Managing your Host based Firewall in MacOSX


9:30 am - VPNs make you LESS secure, not more secure.
Am I the only one who thinks it's funny that all these virus-scanning/security companies offer VPN services for home users, claiming to make your network "more secure"?

Let's think about this for a second. Say you have a firewall on your home network, and you VPN to some 3rd party outside your network. Your IP is now exposed OUTSIDE YOUR NETWORK, with NO FIREWALL to protect you. So if you inadvertently had, say, sshd running on your box (on a Mac, that's just clicking the "Remote Login" checkbox), and you had set a simple password for your root account (or did the OS upgrade that reset the Mac root password TO NOTHING), your box would be exposed to the open internet, with no firewall to block port 22 and no password on the root account.

In other words, using a VPN exposes you to an easy hack.

For this reason I always check "netstat -an | egrep 'tcp.*LISTEN'" on my Mac to see what ports are listening before using a VPN, because all of those ports are going to be open to the world for hackers.

Something to think about. Don't assume VPNs make you more secure. It's actually the opposite.


Tuesday, October 2nd, 2018
4:12 pm - Fun With MacOSX Screensaver
1- It's October so install a jack-o-lantern screensaver! :)


2- I wrote a script to make my screensaver go off after 45 seconds. (the lowest MacOSX will let you go in the GUI is 1 minute).

user=`/usr/bin/id -u -nr`
sudo -u $user defaults -currentHost write com.apple.screensaver CleanExit -string "YES"
sudo -u $user defaults -currentHost write com.apple.screensaver PrefsVersion -int 100
sudo -u $user defaults -currentHost write com.apple.screensaver showClock -string "NO"
sudo -u $user defaults -currentHost write com.apple.screensaver idleTime -int 45
sudo -u $user defaults -currentHost write com.apple.screensaver tokenRemovalAction -int 0

# sudo -u $user defaults -currentHost write com.apple.screensaver moduleDict -dict moduleName -string "iLifeSlideshows" path -string "/System/Library/Frameworks/ScreenSaver.framework/Resources/iLifeSlideshows.saver" type -int 0
# sudo -u $user defaults -currentHost write com.apple.ScreenSaverPhotoChooser LastViewedPhotoPath -string ""
# sudo -u $user defaults -currentHost write com.apple.ScreenSaverPhotoChooser SelectedFolderPath -string "/Path/To/Pictures/To/Show"
# sudo -u $user defaults -currentHost write com.apple.ScreenSaverPhotoChooser SelectedSource -int 3
# sudo -u $user defaults -currentHost write com.apple.ScreenSaver.iLifeSlideShows styleKey -string "VintagePrints"
sleep 2
sudo killall -hup cfprefsd


Saturday, August 25th, 2018
1:36 pm - How to play Magic the Gathering with a standard deck of cards

Requirements:  2 standard decks of playing cards.

Instructions:   Separate the 2 standard decks of cards into suits (clubs, hearts, diamonds, and spades).   Now mix all the hearts and spades together into one deck.  Then mix all the diamonds and clubs into the other deck.  Put 2 Jokers into each deck.

Use the following chart to determine what each card is, and use the standard rules of Magic.


Wednesday, July 4th, 2018
1:15 am - Fun with Die Stats

Is it possible to be in love with a website?  I think I'm in love with this website -> http://rumkin.com/reference/dnd/diestats.php

Try things like:

A) Testing the new DnD advantage/disadvantage system, where you roll 2d20 and keep the higher if you have advantage, or roll 2d20 and keep the lower if you have disadvantage.

* Advantage =  type "2d20D1" (Average = 13.82)

* Neither = type "d20" (Average = 10.5)

* Disadvantage = type "2d20P1" (Average = 7.17)

B) Or test the RATM (roll and take middle) system where you roll 3 d20 and keep the middle value.

* RATM = type "3d20D1P1" (Average = 10.5 again, but with a much tighter standard deviation)
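Those averages can be verified exactly, no simulation needed, by enumerating all 400 ordered pairs of d20 rolls. A sketch in awk:

```shell
# Enumerate every ordered pair (i,j) of two d20 rolls, then average
# the max (advantage) and the min (disadvantage) over all 400 pairs.
awk 'BEGIN {
  for (i = 1; i <= 20; i++)
    for (j = 1; j <= 20; j++) {
      hi += (i > j ? i : j)   # keep higher: advantage
      lo += (i < j ? i : j)   # keep lower:  disadvantage
    }
  printf "advantage=%.3f disadvantage=%.3f\n", hi/400, lo/400
}'
# prints: advantage=13.825 disadvantage=7.175
```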


Tuesday, July 3rd, 2018
11:35 pm - How to move a large mysql database using rsync (optimized for speed)
So I did something pretty cool Monday night (last night).

I have a database that's, let's say, over a TB in size, and it normally takes 4-5 hours to copy from one server to another using rsync.

So I came up with a way to run rsyncs in parallel and maximize throughput. What used to take 4-5 hours finished in 35 minutes using the new technique.

Step 1) The first command handles everything outside of the "big database" that you want to sync. In this case only 5 rsyncs run in parallel. This should finish pretty quickly.

ls -1 /opt/mysql/ | egrep -v "^mybigdatabase$" | xargs -I {} -P 5 -n 1 rsync -rav --progress --inplace --no-whole-file /opt/mysql/{} myhostname.com:/opt/mysql/

Step 2) Now it's time to move the big database. The second command handles all the tables inside the big database. In this case 30 rsyncs run in parallel (my host has 32 cores):

ls -1 /opt/mysql/mybigdatabase/ | xargs -I {} -P 30 -n 1 rsync -rav --progress --inplace --no-whole-file /opt/mysql/mybigdatabase/{} myhostname.com:/opt/mysql/mybigdatabase/
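The fan-out in both commands is just xargs -P: feed it one name per line and it keeps up to N children running at once. The pattern can be sanity-checked with a harmless stand-in for rsync (echo here, and made-up item names):

```shell
# Same shape as the rsync commands above: one input item per child
# process, up to 4 children at a time, echo standing in for rsync.
# -I {} substitutes each input line and implies one item per command.
seq 1 8 | xargs -I {} -P 4 echo "syncing item {}"
```

The ordering of the output lines can vary, which is exactly the point: the children run concurrently.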


Thursday, June 14th, 2018
10:31 am - perl examples

Keep this... like forever....



Tuesday, May 29th, 2018
10:53 am - Desktop Icon for starting up VPN connection under Ubuntu
$ cat ~/Desktop/startvpn.desktop 
#!/usr/bin/env xdg-open
[Desktop Entry]
Type=Application
Name[en_US]=Start VPN
Name=Start VPN
Exec=/home/ballison/bin/start_vpn.sh

$ cat /home/ballison/bin/start_vpn.sh
sudo openvpn --script-security 2 --config ~/work/vpn/current/vpn-my_account.ovpn 


Note - Created with: gnome-desktop-item-edit --create-new startvpn.desktop


Thursday, May 24th, 2018
12:08 pm - Getting your DNS to work under OpenVPN on Ubuntu.



Tuesday, May 22nd, 2018
3:52 pm - Fun with AWS, JSON and JQ

I have a lambda function that runs every night and makes AMI backups for any instance where we've set the Tag name "Backup" equal to "True".

So if Backup is not defined as a tag, the server does not get backed up.  

So I wanted to know all the running instances where Backup was _not_ defined, so I could see what is _not_ being backed up.

Here's the command I came up with:

for i in us-east-1 us-west-2; do  
	aws ec2 describe-instances --filter "Name=instance-state-name,Values=running"  --output json --region $i \
		| jq '.Reservations[].Instances[] | select(contains({Tags: [{Key: "Backup"}, {Value: ""}]}) | not)' \
		| jq -r '.Tags[] | select(.Key=="Name") |.Value'; 
done  |sort 

View/Download: https://github.com/jackal242/brads_scripts/blob/master/aws_no_backup


Monday, May 7th, 2018
1:03 pm - ssh - Unknown cipher type 'blowfish'

I just upgraded my instance from Amazon Linux "2016.03" to "2018.03" and now none of my autossh tunnels are working.

Turns out "blowfish" is no longer an acceptable cipher name for ssh. Now it's "blowfish-cbc".

        [root@ip-10-0-0-89 init.d]# ssh -c blowfish 0
        Unknown cipher type 'blowfish'

        [root@ip-10-0-0-89 init.d]# ssh -V
        OpenSSH_7.4p1, OpenSSL 1.0.2k-fips  26 Jan 2017

They renamed the cipher to "blowfish-cbc".

        [root@ip-10-0-0-89 init.d]# ssh -c blowfish-cbc 0



Monday, October 23rd, 2017
11:29 am - Wrote a script to get AWS IAM policies

# Description: Script to list all and resolve all policies associated with a given IAM user



Monday, September 25th, 2017
12:47 am - bin/fix-strongvpn.sh

brad-allison-mbp:~ brad.allison$ cat bin/fix-strongvpn.sh 

sudo ifconfig en0 down
sleep 1
sudo route flush
sleep 1
sudo ifconfig en0 up


Friday, August 11th, 2017
2:45 pm - How to replace zgrep with pigz

When you run zgrep, you are actually just running gzip with its decompression options, piped to grep.

So to replace zgrep with pigz (to make it much, much faster), just pigz -dc the file and pipe it to grep, which is basically what zgrep is doing. Except, as you know, pigz will automatically thread out across the cores in your host to make it super fast.

Here's an example from one of my scripts. The first line is the original zgrep call, the second is what zgrep effectively runs under the hood, and the third is the pigz replacement:

  zgrep "$firm" /u/pound.log-20170${i}${y}.gz

  gzip -cdfq -- /u/pound.log-20170${i}${y}.gz | grep "$firm"

  pigz -dc /u/pound.log-20170${i}${y}.gz | grep "$firm"
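To see the equivalence concretely, here's a throwaway demo (file name and contents made up) that greps a gzipped file the same way zgrep does internally; swap in pigz -dc wherever pigz is installed:

```shell
# Build a tiny gzipped log, then grep it via explicit decompression,
# which is the same pipeline zgrep constructs for you.
printf 'GET /a firmA\nGET /b firmB\n' > /tmp/pound.log.demo
gzip -f /tmp/pound.log.demo
gzip -cdfq -- /tmp/pound.log.demo.gz | grep "firmA"
# prints: GET /a firmA

# With pigz installed, the drop-in replacement is:
#   pigz -dc /tmp/pound.log.demo.gz | grep "firmA"
```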


Monday, June 5th, 2017
1:18 pm - How to audit the configuration of the DNS entry for DNS Failover in AWS Route 53.

$ aws route53 list-resource-record-sets --hosted-zone-id Z2I27QCOOT2SB2 --query "{ResourceRecordSets:ResourceRecordSets[?Name == 'myhostlala.mydomainfofo.com.'].{HealthCheckId:HealthCheckId}}"
    "ResourceRecordSets": [
            "HealthCheckId": "XXXXXXX-YYYY-4cdd-8400-XXXXXXXXX"
            "HealthCheckId": "XXXXXXX-YYYY-48c2-bb42-XXXXXXXXXX"

$ aws route53 get-health-check --health-check-id XXXXXXX-YYYY-4cdd-8400-XXXXXXXXX  --query HealthCheck.HealthCheckConfig.[{Type:Type},{Port:Port}]
        "Type": "TCP"
        "Port": 443


Monday, March 20th, 2017
10:35 am - perl split example
Because I can never ever ever remember how to use split correctly in Perl:
($count_of_worst_ip,$worst_ip,$culprit) = (split(/ /,$WORST_COUNT_PER_MINUTE_PER_IP ))[0,1,2];


Tuesday, November 15th, 2016
7:36 pm - cool sed trick
Wrote something cool tonight.

I like this sed trick. I need to use it more. It uses the date-stamp string as a sed address: sed prints from the first line matching that string through the end of the file, so you are only grepping from the first occurrence of the timestamp down to the end of file.

And the date string is set to now minus 10 minutes, with the last character removed. So if it's currently "2016-11-16 00:18", then it looks from "2016-11-16 00:0" down until end of file.

So the window is always going to be the last 10-20 minutes of logs that it looks at.

ERROR_STRING="com.amazonaws.AmazonClientException: Unable to execute HTTP request: Timeout waiting for connection from pool"
DATE_REGEX=$(date "+%Y-%m-%d %H:%M" -d "10 min ago" | sed s'/.$//')  # Example 2016-11-16 00:0
COUNT=$(sed -n "/$DATE_REGEX/,\$p" ~tomcat/logs/mytomcat.log | grep "$ERROR_STRING" | wc -l )

if [ "$COUNT" -gt "0" ]; then
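The /regex/,$p address range generalizes beyond timestamps. A toy example (log contents made up) showing sed printing from the first match through end of file:

```shell
# sed -n '/PATTERN/,$p' prints from the first line matching PATTERN
# through the last line; lines before the first match are dropped.
printf 'old entry\n2016-11-16 00:08 timeout\n2016-11-16 00:12 ok\n' \
  | sed -n '/2016-11-16 00:0/,$p'
# prints:
# 2016-11-16 00:08 timeout
# 2016-11-16 00:12 ok
```

Note that once the range opens it runs to end of file, so the second line prints even though "00:12" doesn't itself match the pattern.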

