Posts

Showing posts from 2011

Net10 is a frustrating company

I'm done with Net10. I've had a phone with them since June 2011. I originally signed up for the $15/month plan for 200 minutes, and it worked great. I was charged 1 unit/minute of talk and .25 units/text. I calculated everything out and figured that adding a 900-minute Pay As You Go card would supplement the plan and get me through the year, or so I thought. I was still being charged the standard 1 unit for each minute of talk and .25 units/text. In October I decided to try the $25/month plan for 750 minutes. There's no rollover with that plan, but I figured it would be the perfect plan. Net10, however, is not up front about any of the costs on their plans. Oh sure, they'll respond to an email and tell you the costs are buried in the terms and conditions, and indeed they are. However, an honest company would simply put that right on the plan page so you know how the charges change between plans. So I changed and began to be charged 1 unit/text, up from the .25 units/text on the other

Query last 5 installed rpms by date

rpm -qa --queryformat '%{installtime} (%{installtime:date}) %{name}\n' | sort -n | tail -5
Source: http://www.tummy.com/journals/entries/jafo_20071031_225537
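If you just want the most recently installed packages and don't need a custom format, rpm's --last option (which sorts by install time, newest first) should get you something similar:

rpm -qa --last | head -5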

Changing the email address on your Nike + iPod account

These directions worked on a Mac running 10.5.8. I recently changed the email address on my Nike+ account. Getting it to sync properly with iTunes again was a tad challenging. I basically followed the steps here, but I did not reset my iPod. I lost a few runs because I didn't do things in the proper order.
1. Uncheck "Automatically send workout data to nikeplus.com" in the Nike + iPod tab in iTunes and sync your iPod (in iTunes) without new workouts on your iPod.
2. Get a new workout recorded on your iPod. It can simply be walking around for 5 or 10 seconds.
3. Log in to Nike+ using Safari.
4. Connect your iPod with an unsynced run/walk.
5. Recheck "Automatically send workout data to nikeplus.com" in the Nike + iPod tab in iTunes and click Apply. Your new Login ID is hopefully now showing in iTunes.

Net10 monthly plans

I recently made a post to the Net10 forums about their monthly plans. After I submitted it, a notice said a moderator needed to approve my post. So clearly Net10 does not want open discussion on their boards, and I'm reposting my info here, where I moderate what I post :) Beware the 750 minute 30 day plan. If all you do is make phone calls, the plan is fine. If you text, the plan is terrible. They don't tell you this up front, although they do kindly bury it in the terms and conditions: you will be charged 1 unit (one minute) per text message. We have an LG900G phone and were being charged .25 units/text message. As we started to do more texting I thought moving to the 750 min/month plan would be ideal. It's my bad for not carefully reading the terms and conditions, but Net10 should put that kind of info up front so it's perfectly clear. So for now I'll continue ripping through minutes until the minutes are used up and go back to another plan and supplement w
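To see why the text rate matters, here's a rough back-of-the-envelope calculation with made-up usage numbers (400 minutes and 600 texts in a month); only the per-unit rates come from this post:

minutes=400; texts=600
echo "$minutes + $texts * 0.25" | bc    # old rate: 550 units
echo "$minutes + $texts * 1" | bc       # 750-plan rate: 1000 units, more than the whole plan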

Restore using pg_restore

We run daily dumps on our PostgreSQL databases. The command we use to dump the databases is:
pg_dump --blobs --compress=9 --format=c --verbose DBNAME --file=DUMPFILE.c
I recently had to restore one of the dumps to a different server and kept receiving the error:
ERROR: invalid byte sequence for encoding "UTF8": 0x93
I discovered that my original system had the database encoded as SQL_ASCII and the new system was encoding all new databases as UTF8. So I ran a create database with the proper encoding:
CREATE DATABASE newdbname WITH ENCODING 'SQL_ASCII';
Then restored the file using pg_restore:
pg_restore --dbname=newdbname DUMPFILE.c
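One caveat worth adding: on many PostgreSQL versions, creating a database with an encoding different from template1's requires basing it on template0, so the full sequence (using the names from this post) would look something like:

# create the database from template0 so the differing encoding is accepted
psql -c "CREATE DATABASE newdbname WITH ENCODING 'SQL_ASCII' TEMPLATE template0;"
pg_restore --dbname=newdbname DUMPFILE.c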

Argument list too long

I've run into this running rm * and du -hcs *. The solution is to generate the file list with find and pipe it to xargs:
find . -name 'filename*' | xargs rm
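If your filenames can contain spaces, a null-delimited variant is safer; with GNU find you can also skip xargs entirely. Both of these are standard options, though test on your own system first:

find . -name 'filename*' -print0 | xargs -0 rm
find . -name 'filename*' -delete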

Slow data load speeds with Greenplum

Recently started loading data into a brand new Greenplum DCA. Data load speeds should be blazing fast, right? Well, ours were very slow. It was taking 50 seconds to load a 2.6GB csv file, which is pitifully slow. I finally figured out what error I had made. We were moving data from one GP system to another. That process involved dumping the schema out of the production system, dumping the data out of the production system, restoring the schema in the new contingency system, then restoring the data into the new contingency system. The problem was that when we restored the schema into the new system, we restored everything, including indexes. So as I was attempting to load the new data, the system was indexing it at the same time. After dropping the indexes, that 2.6GB csv file loaded in about 8 seconds. That's more like it.
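A sketch of the fix, with hypothetical database, table, and index names standing in for ours:

# drop the index, load the data, then rebuild the index afterward
psql -d mydb -c "DROP INDEX myschema.mytable_idx;"
psql -d mydb -c "\copy myschema.mytable FROM 'data.csv' WITH CSV"
psql -d mydb -c "CREATE INDEX mytable_idx ON myschema.mytable (some_column);"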

Summer brew

Never tried this before but a friend told me about it and it sounded good.
- 1 large can of frozen limeade (empty into a pitcher)
- Refill the limeade can with vodka and pour into the pitcher
- Refill the can one more time with water (or half way, depending on how far you want it to stretch) and pour into the pitcher
- Add 6 cans of light beer (cheap beer is fine) to the pitcher and stir
- Serve in a glass / cup with crushed ice

Time sucking queries

Had a user submit a query recently:
select count(*) from (select * from viewA UNION select * from viewB)
viewA pulls from two tables with a combined 1.5 billion rows and around 60 columns. viewB pulls from a table with about 300 million rows. I rewrote the query as follows:
select (select count(*) from viewA) + (select count(*) from viewB) AS count
Same result (note the rewrite matches UNION ALL semantics; a plain UNION would also de-duplicate rows across the two views), but the first query has an explain plan with a cost 26X the second. The second query runs in about 3 minutes.

Terminal title in OS X

I did not come up with this. A quick Google search turns up multiple ideas. I combined a few and trimmed things down to how I wanted the title to look. Here's my final result:
case $TERM in
  (xterm*)
    export PROMPT_COMMAND='echo -ne "\033]0;${USER}@$(hostname -s)\007"'
    ;;
esac
I added that code to ~/.bash_profile. Now my Terminal title changes when I open a Terminal or ssh into a machine and then exit back to my Mac.
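If you also want the current directory in the title, a small variant (my own assumption, not from the sources I combined) would be:

export PROMPT_COMMAND='echo -ne "\033]0;${USER}@$(hostname -s): ${PWD}\007"'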

Distributed by and updates in Greenplum

I have an earlier post where I list how to update table A from table B. In that example the WHERE clause sets tableA.column=tableB.column. That works perfectly as long as your tables are DISTRIBUTED BY (column). If one, or both, are distributed randomly, you are out of luck. At least this applies to 3.3.x; I haven't tried this on a GP 4.x setup. I had tableA distributed randomly, and when I tried to redistribute by column I would get gang errors. Here's the fix I implemented. Run a pg_dump --schema-only on tableA. Run ALTER TABLE and rename tableA to tableA_orig. Edit the schema dump of tableA and change the distribution from randomly to (column). Run the schema dump to recreate tableA distributed by column. Populate the new tableA with: INSERT INTO tableA SELECT * FROM tableA_orig. There are a few other things to be aware of. You'll need to redo the indexes on tableA or edit the schema and change the name of any indexes because the original indexes will now be associated w
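Here's roughly what those steps look like as commands, with mydb and the tableA/column names as placeholders:

pg_dump --schema-only --table=tableA mydb > tableA_schema.sql
psql -d mydb -c "ALTER TABLE tableA RENAME TO tableA_orig;"
# edit tableA_schema.sql: change DISTRIBUTED RANDOMLY to DISTRIBUTED BY (column)
psql -d mydb -f tableA_schema.sql
psql -d mydb -c "INSERT INTO tableA SELECT * FROM tableA_orig;"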

Database, schema, and table sizes in Greenplum

Starting in the 4.x release, you get size info from the gp_toolkit schema. To get all databases and their size in bytes:
select sodddatname, sodddatsize from gp_toolkit.gp_size_of_database;
To see the database size in TB, GB, or MB:
TB: select sodddatname, (sodddatsize/1073741824)/1024 AS sizeinTB from gp_toolkit.gp_size_of_database;
GB: select sodddatname, (sodddatsize/1073741824) AS sizeinGB from gp_toolkit.gp_size_of_database;
MB: select sodddatname, (sodddatsize/1048576) AS sizeinMB from gp_toolkit.gp_size_of_database;
For schema sizes, connect to your database and run:
TB: select sosdnsp, (sosdschematablesize/1073741824)/1024 AS schemasizeinTB from gp_toolkit.gp_size_of_schema_disk;
GB: select sosdnsp, (sosdschematablesize/1073741824) AS schemasizeinGB from gp_toolkit.gp_size_of_schema_disk;
MB: select sosdnsp, (sosdschematablesize/1048576) AS schemasizeinMB from gp_toolkit.gp_size_of_schema_disk;
If you want a specific schema only, add

Unable to log into OS X

Ran into a problem being unable to log into an OS X machine joined to an AD domain. I had previously logged in successfully with this account. I logged in as a local administrator on the machine and noticed the account was no longer listed in System Preferences/Accounts, but other domain accounts were. Odd. Ran dscl . -list /Users and the account was listed. There was also a directory structure under /Users/userid. Ran dscl . -delete /Users/userid and was then able to successfully log in. The home directory was untouched, so no data was lost.

Fixing size bloat in Greenplum tables

I noticed recently that some queries on a table were running very slow. Simple counts were taking longer on what appeared to be smaller tables. I say appeared because the tables had 1/5 the number of rows of other tables, but queries were slower. The raw data files contained about 7 GB of data. To see what the Greenplum system had for table size, I ran this query:
select pg_size_pretty(pg_relation_size('schema.table_name'));
The answer was 190GB! Clearly there were problems. This table does get reloaded every month with new data, but I truncate the table before reloading it, so you aren't supposed to run into issues. Anyway, there turned out to be a couple of solutions, sketched below. One was to run a vacuum full on the table. After running that, the table size was reported as 5.635 MB. I did try a plain vacuum on the table, but it had no impact on size. Another solution is to redistribute the data randomly and then redistribute by the table key. ALTER TABLE schema.table_name SET DISTRIBUTED RANDOM
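The two fixes sketched out as commands (the database name and the table's key column are placeholders):

psql -d mydb -c "VACUUM FULL schema.table_name;"
# or, redistribute randomly and then back by the table key:
psql -d mydb -c "ALTER TABLE schema.table_name SET DISTRIBUTED RANDOMLY;"
psql -d mydb -c "ALTER TABLE schema.table_name SET DISTRIBUTED BY (key_column);"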

Burn OS X Lion DVD

Found the directions here. Nuts and bolts:
1. Once you've pulled Lion down from the Mac App Store, right-click on the installer and select the option "Show Package Contents." This is your Mac's way of tearing the wrapping off a virtual install disk to access all the shiny bits (aka "files") inside.
2. Open the "Contents" folder, then look for a "SharedSupport" folder and open that. Inside, you'll find something called "InstallESD.dmg." This is the money file we're looking for (or the "master control program" if you're a Tron wonk).
3. Copy that file ("InstallESD.dmg") to a folder outside of the installer (your desktop works).
4. Open the "Utilities" folder on your Mac and launch "Disk Utility."
5. Select "Burn" from the "Images" menu option, or just click the yellow and black icon on the menu bar (which, disturbingly, looks just like the official symbol for nucle

OS X right side of menu bar frozen

I am having a problem with my Mac where the right side of the menu bar will freeze when I run VMware. The problem is that my clock freezes and I lose track of time. While this isn't a solution for the overall problem, I found a fix that allows me to reset the menu bar and get my clock back on track. Simply run the following command in a Terminal:
killall SystemUIServer

Batch module replace in Drupal

Every so often I update a Drupal site and need to update a bunch of modules/themes at the same time. Fortunately, the naming convention of modules/themes is consistent, so I can update many with one little script. First, scp all the module/theme gz files to the location where you want to store them. Next, cd into that directory. Last, run this script:
for thm in $(ls *.gz)
do
        # module/theme name is everything before the first dash, e.g. views-7.x-3.0.tar.gz -> views
        thmnew=$(echo ${thm} | cut -d'-' -f1)
        # archive the existing module/theme directory and set it aside
        tar -cf ${thmnew}_old.tar ${thmnew}
        mv ${thmnew}_old.tar /usr/local/src/drupal/tmp/themesupgraded/
        mv ${thmnew} /usr/local/src/drupal/tmp/themesupgraded/
        # unpack the new version and stash the tarball
        tar -zxvf ${thm}
        mv ${thm} /usr/local/src/drupal/tmp/themesupgraded/
done

Stop OS X bouncing Dock icons

Hallelujah! These drive me nutty sometimes. Highlights:
Open Terminal
defaults write com.apple.dock no-bouncing -bool TRUE
killall Dock
To re-enable:
Open Terminal
defaults write com.apple.dock no-bouncing -bool FALSE
killall Dock

Rsync transfer statistics

I wanted to take a look at the total amount of data that would be transferred for an rsync operation. Running
/usr/bin/rsync --dry-run -avhz --delete -e ssh /sourcepath destserver:/destpath
would give the total size of /sourcepath but not the amount transferred. Adding --stats gives you the amount of data that needs to be transferred to get /sourcepath and /destpath in sync:
/usr/bin/rsync --dry-run -ahz --stats --delete -e ssh /sourcepath destserver:/destpath

Samsung Intercept on Virgin Mobile

I've been using the Samsung Intercept on Virgin Mobile since Dec. 2010. I thought I'd take a little time and write up my experience with the phone and the service. This is my first smartphone. I've always been fine using a "feature" phone since my primary use was calls with an occasional text. I was using a Samsung T401g on the Net10 service, and it worked great for me. The problem was that I was carrying an iPod touch and the Samsung phone, and I decided I was tired of carrying two devices in my pockets all the time. I use Google services heavily and thought it would be handy to be able to enter items in my Google calendar as they come up. Previously I would enter items into my Google Calendar on my computer and then have text messages sent to my phone to alert me. That worked great, but I wanted the ability to add them on the phone. That and going down to one device was also appealing. My problem was that I didn't want to pay at least $70/month for service. At

ssh-add could not open a connection

Ever try to run ssh-add and get this message?
Could not open a connection to your authentication agent
Run this command:
exec ssh-agent /bin/bash
or just add that to your .bash_profile so it runs on login.
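Another common approach, if you'd rather not replace your shell with exec, is to eval the agent's output so the environment variables land in your current shell:

eval "$(ssh-agent -s)"
ssh-add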

CentOS 5.5 fails on install

There's a known bug in CentOS 5.x that causes the install to fail if you select the Extras repository during the setup process. So DO NOT choose the Extras repository when you install.

OS X boots to grey screen with spinning disc

Recently went through a scenario where I was turning a MacBook Pro into a fashionable brick. Install the OS, completely update, apply a bunch of new settings & software, reboot, and fu.... Grey screen with spinning disc. I'd diligently boot into my install DVD, repair the disk (it saw errors about file and directory counts and fixed them), repair permissions, reboot, and fu... Repeat process. I finally figured out that OS X reads every file in /System/Library/LaunchDaemons upon boot, even files that don't end with .plist. I didn't want to fubar any of the files in that directory, so I would create a backup copy of the file I was working with, save it in that same directory, and end the file name with .ORIG. Turns out that upon boot both files were bumping into each other and the machine would not boot. I finally moved my backup file into /Users/Shared and the machine boots. So don't keep backup copies of files in /System/Library/LaunchDaemons; store them somewhere else.

Bash substrings

Say you have lines in a file that are of the format 123:200105 and you need to split out the strings to work with them.
string=123:201004
string2=${string:0:3}   # result is 123
string3=${string:4:6}   # result is 201004
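Since the field separator is a colon, read with a custom IFS works too and doesn't depend on fixed field widths:

IFS=: read string2 string3 <<< "$string"   # string2=123, string3=201004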

Black Bean & Roasted Red Pepper Dip

Don't know where I found this but it was stored as a random txt file on my computer.
Black Bean & Roasted Red Pepper Dip
1 pkg (1 oz.) taco spices & seasonings
1 can (15 oz.) black beans, rinsed and drained
1 jar (7 oz.) roasted red peppers, drained
1 pkg (12 oz.) light cream cheese
1 Tbsp. chopped cilantro
1 to 2 teaspoons lime juice
Garnishes: chopped tomato and fresh cilantro
In a food processor combine all ingredients except garnishes; mix thoroughly. Makes 2 cups.

Drupal site move results in off-line message

I recently moved a Drupal site from test to production. I followed the standard procedure for moving a site, but when I went to access the new site it told me the site was off-line. My big problem was that my log files weren't telling me anything useful, nothing at all. I kept poking around and finally figured out I had made a typo when setting up the database and didn't have the MySQL permissions assigned correctly. As soon as the proper grant was issued on the database, the site immediately became available.
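For reference, the kind of grant that was missing looks something like this (the database name, user, and password here are hypothetical):

mysql -u root -p -e "GRANT ALL PRIVILEGES ON drupaldb.* TO 'drupaluser'@'localhost' IDENTIFIED BY 'secret'; FLUSH PRIVILEGES;"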

Greenplum update with multiple tables

Recently ran into an issue using Greenplum to update a column. I had column1 in tableA that needed to be set to the value of column1 in tableB. I finally came across the correct syntax here. This is what I was looking for:
UPDATE tableA
SET col1 = tableB.col1
FROM tableB
WHERE tableA.col2 = tableB.col2;