Wednesday, December 07, 2011

Cloning an Oracle Schema

An Oracle schema can be "cloned" by using the expdp and impdp utilities.

1) Create a directory on the file system and let Oracle know about it.
a) [bash assumed]: mkdir /home/oracle/data_dmp
b) [in sqlplus]: create directory data_dmp as '/home/oracle/data_dmp';
grant read,write on directory DATA_DMP to myschema;

2) Export schema to file:
expdp myschema/**** dumpfile=myschema.dmp logfile=myschema.exp schemas=myschema directory=data_dmp

3) Create a user for your new schema. I am cloning this schema simply to run a test of some scripts that will modify the schema. I like to call my temporary schemas "delme" so I remember to delete them.

[I created the user in SQL Developer and granted all because I'm lazy and want it done fast.]

(Consider the above crossed out - Step 4 below will actually create the user for you!)

4) Import!
impdp myschema/**** directory=data_dmp dumpfile=myschema.dmp logfile=myschema.imp remap_schema='MYSCHEMA':'DELME'
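
To sanity-check the clone afterwards, a quick object-count comparison between source and target works well (a sketch; assumes you have access to dba_objects):

select owner, object_type, count(*)
from dba_objects
where owner in ('MYSCHEMA', 'DELME')
group by owner, object_type
order by object_type, owner;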

Sunday, November 27, 2011

Selecting random row on various databases

Here's the link, and below is my peanut gallery commentary:
MySQL and Postgres: Simple and straightforward
SQL Server: Simple but odd function name to have to remember to call
DB2: Um, awkward. Can you not just do "order by RAND()" in DB2?
Oracle: So...I HAVE to make a subquery and use rownum? Oy vey. Oracle has a knack for being beyond powerful, but the simplest things can be way too clumsy. Take sqlplus for example.
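
For reference, the gist of each flavor as I understand it (a sketch against a hypothetical table t):

-- MySQL
select * from t order by rand() limit 1;
-- Postgres
select * from t order by random() limit 1;
-- SQL Server
select top 1 * from t order by newid();
-- Oracle: subquery plus rownum, per my griping above
select * from (select * from t order by dbms_random.value) where rownum = 1;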

Tuesday, November 22, 2011

History/Up Arrow in sqlplus

Having used PostgreSQL's psql for a few years, returning to sqlplus feels almost painful. Here's a way to at least add an up-arrow history to sqlplus - http://linuxtechres.blogspot.com/2011/04/how-to-make-sqlplus-more-friendly.html

Unfortunately it requires installing rlwrap from http://utopia.knoware.nl/~hlub/rlwrap/ - but it does work well.
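
The basic trick, once rlwrap is installed, is just to wrap sqlplus with it; an alias makes it permanent (a sketch for bash):

alias sqlplus='rlwrap sqlplus'    # e.g. in ~/.bashrc
sqlplus myschema@mydb             # up-arrow history now works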

Monday, November 07, 2011

A modest proposal?

From a chat with Buddy.

(2:19:29 PM) Me: sometimes i wish sql, or at least individual RDBMSs, had a built-in way to "disable" a row, simply cause i run into this pattern so much :)

(2:19:46 PM) Buddy: disable a row?

(2:25:04 PM) Me: yeah, here's a sample use case, not necessarily fully thought out :)... I have to add (well I'm debating whether it's worth the effort) the ability to delete a product... but burndown_metric references product, and transaction references burndown_metric...I absolutely can't delete rows in transaction. So I have to add an enabled flag to product, and that flag will have to be checked in almost all queries (either that or i'll make something like product_v that filters out enabled=false)...so what if I could avoid all this and, instead of "DELETE FROM product WHERE ..." I could do "DISABLE FROM product WHERE ..." - which makes it so for all queries, by default, the row is considered deleted and gone just about automatically, but foreign keys aren't broken...and for those occasions where you want to see the disabled data, maybe you could do something special like "SELECT FROM product WHERE ... INCLUDE DISABLED"

(2:31:35 PM) Buddy: ah word, just about every single one of my tables I have a deleted flag (I store a datetime rather than a boolean - the time it was deleted) so many of my queries have "where deleted is null"

(2:32:09 PM) Me: yeah

(2:32:29 PM) Me: just such a common pattern it seems, and i hate having to go through this time and time again :)

(2:37:08 PM) Me: in reality even if a database added the feature today, most people would be ignorant of it, and when it's used it would baffle everyone...getting results they didn't expect and not knowing about INCLUDE DISABLED

(2:38:44 PM) Me: Table A has a foreign key to Table B...yet you join the two and get no results
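
For the record, here's roughly what the workaround looks like today without any such DISABLE support (a sketch in Postgres, with hypothetical tables):

-- the flag on the parent table (Buddy's variant uses a deleted timestamp instead)
alter table product add column enabled boolean not null default true;

-- "disable" instead of delete, so rows referencing the product survive
update product set enabled = false where product_id = 42;

-- the view that most queries would then use
create view product_v as
select * from product where enabled;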

Friday, September 23, 2011

Installing Postgres 9.0 on CentOS 5 using yum repository

This was useful to me today - http://anujjaiswal.wordpress.com/2011/06/14/installing-postgres9-0-and-postgis-on-centos/

Not perfect as written. Be sure to read the comments. Also beware of a few minor typos, such as "Centos-Base.repo", which should really be "CentOS-Base.repo".

Thursday, August 11, 2011

Another PostgreSQL Diff Tool (apgdiff)

A useful tool for comparing two schemas is Another PostgreSQL Diff Tool (apgdiff).

I use it for upgrade scripts. Does my "upgrade schema version x to y" script produce the same schema as my "create schema version y from scratch" script? Apgdiff helps me pinpoint any differences and correct them.
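
Typical usage is a one-liner (a sketch; apgdiff ships as a Java jar, and the file names here are mine):

# output is the SQL needed to turn the first schema into the second
java -jar apgdiff.jar schema_from_upgrade_script.sql schema_from_scratch.sql > differences.sql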

Wednesday, August 10, 2011

Postgres query using window functions, recursive query

As Tom Kyte says, "Analytics rock. Analytics roll." Among other things, Oracle's analytic queries allow a row in the resultset to reference other rows in the same resultset. This is tremendously powerful stuff. If you're not familiar with analytics, take a look at some of the magic Tom performs in that thread I just linked. Some simple use cases:
1) Get a cumulative sum - maybe you have a table of individual purchases and want to see how the sum of purchases progresses over time.
2) Perform your typical GROUP BY query using an aggregate function, BUT you don't want to roll up the data. You want to see every row in the table you're querying, with the AVG (or MAX, SUM, whatever) alongside it. See the first example in this Postgres doc.
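
Use case 1 looks something like this in Postgres (a sketch against a hypothetical purchases table):

select purchase_time,
       amount,
       sum(amount) over (order by purchase_time) as running_total
from purchases
order by purchase_time;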

But did you know Postgres now has Analytics too, as of 8.4? They are called Window Functions in the Postgres world. (I suppose you could call them "queries that use window functions", or even "analytic queries" as Oracle can't force you not to use the phrase.)

I had a requirement. Stripped down to the core, the requirement was to translate this:

 ID | expire
----+-----------
  7 | 8/20/2011
  6 |
  5 | 8/15/2011
  4 | 8/15/2011
  3 |
  2 |
  1 | 8/10/2011

To this:

 ID | expire
----+-----------
  7 | 8/20/2011
  6 | 8/15/2011
  5 | 8/15/2011
  4 | 8/15/2011
  3 | 8/10/2011
  2 | 8/10/2011
  1 | 8/10/2011

The translation is: If expire is null, look down to the next older expire. If THAT one is null too, look down again, and so on. The solution that worked very well for me was to use Window Functions, plus recursive ("WITH RECURSIVE") queries.

Edit: A helpful commenter provided this simpler solution:

select id,
       max(expire) over (order by id rows between unbounded preceding and current row) as expire
from t
order by id desc;

The windowing clause in this query says: order the rows by id ascending, and for each row, get the max value of expire looking backwards from this row to all previous rows. Pretty simple! The Oracle solution should be the same, btw.

I'll leave my older code and thought processes in this article for posterity.

Step 1 was to use Window Functions to structure a view of the table above, such that the previous and next IDs could be found in each row, almost like a linked list data structure:

 ID  prev_id  next_id  expire
---  -------  -------  ---------
  7    null      6     8/20/2011
  6     7        5
  5     6        4     8/15/2011
  4     5        3     8/15/2011
  3     4        2
  2     3        1
  1     2       null   8/10/2011
(In the real world, my IDs aren't always such simple, gap-free numbers, so I cannot use any simple math tricks, such as finding prev_id by subtracting 1 from ID.)

Step 2 was to use WITH RECURSIVE to traverse from row to row, via the pointers in our linked list, to form the end resultset we're looking for. This entailed another little trick. Coalesce two expires (the current and the parent) to make the non-null values bubble up.

Here's the SQL I ended up with. Creating the view, transaction, with window functions:
CREATE VIEW transaction AS
select
    id,
    lag(id) over (partition by account_id order by id asc) lag_id,
    lead(id) over (partition by account_id order by id asc) lead_id,
    expire
from trans;


Recursive query against transaction:
WITH RECURSIVE recur(id, expire) AS (
    select id, expire
    from transaction
    where lag_id is null
    union all
    select transaction.id, coalesce(transaction.expire, recur.expire)
    from transaction, recur
    where transaction.lag_id = recur.id
)
select * from recur;

I'm really pleased with the advanced querying features Postgres has been putting out. Actually, WITH RECURSIVE is ANSI standard, as it turns out, and you can even find it in SQL Server, DB2 and Oracle 11g (though Oracle has had its own way of performing recursive queries using CONNECT BY, for ages).

Tuesday, July 19, 2011

How to query SQL Server transaction locks

Being somewhat of a SQL Server novice, I get by mostly in the Management Studio. Well, I'm trying to view my database's tables, but the GUI is just kind of hanging. So I thought I would see if any transaction locks might be blocking me. Here's how to query that info:

http://weblogs.sqlteam.com/mladenp/archive/2008/04/29/SQL-Server-2005-Get-full-information-about-transaction-locks.aspx

Now, this query isn't responding either, but I thought it would be a good idea to keep track of it regardless.
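
The heart of that query is the sys.dm_tran_locks DMV (SQL Server 2005 and up). A minimal sketch to see who's waiting on what:

select request_session_id, resource_type, request_mode, request_status
from sys.dm_tran_locks
where request_status = 'WAIT';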

Wednesday, June 29, 2011

resizing SHMMAX

Anyone who has used Postgres and changed their memory settings to "greater than tiny" has surely run into this problem. (I'm sure this applies to other databases too - I believe I've run into this in Oracle also). The problem is the kernel won't let Postgres take as much shared memory as it is requesting.


postgres@mwrynnix:~$ service postgresql restart
 * Restarting PostgreSQL 9.0 database server
 * The PostgreSQL server failed to start. Please check the log output:
4e0b32b0.2cd4.1(0) 2011-06-29 10:12:01.406 EDT @: FATAL:  could not create shared memory segment: Invalid argument
4e0b32b0.2cd4.2(0) 2011-06-29 10:12:01.406 EDT @: DETAIL:  Failed system call was shmget(key=5432001, size=139411456, 03600).
4e0b32b0.2cd4.3(0) 2011-06-29 10:12:01.406 EDT @: HINT:  This error usually means that PostgreSQL's request for a shared memory segment exceeded your kernel's SHMMAX parameter.  You can either reduce the request size or reconfigure the kernel with larger SHMMAX.  To reduce the request size (currently 139411456 bytes), reduce PostgreSQL's shared_buffers parameter (currently 16384) and/or its max_connections parameter (currently 104).
        If the request size is already small, it's possible that it is less than your kernel's SHMMIN parameter, in which case raising the request size or reconfiguring SHMMIN is called for.
        The PostgreSQL documentation contains more information about shared memory configuration.
[fail]

(In case you didn't realize startup didn't work, it tells you [fail] at the end there.)

This page explains it all - http://www.dbatodba.com/db2/how-to-do/how-to-alter-shmall-and-shmmax-kernel-parameters-on-linux

I would just like to add that in addition to changing the conf file (mentioned on that page), you can modify the file /proc/sys/kernel/shmmax to set it to the value you want NOW. You'll still need to change the .conf to "persist" the change when you reboot. Example below - changing 32 MB shmmax to 256MB:

root@mwrynnix:~# cat /proc/sys/kernel/shmmax
33554432
root@mwrynnix:~# echo 268435456 > /proc/sys/kernel/shmmax
root@mwrynnix:~# cat /proc/sys/kernel/shmmax
268435456
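
Equivalently, sysctl can make the same immediate change (editing /etc/sysctl.conf is still needed to persist it across reboots):

root@mwrynnix:~# sysctl -w kernel.shmmax=268435456
kernel.shmmax = 268435456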

And now we start up Postgres...

root@mwrynnix:~# service postgresql start
 * Starting PostgreSQL 9.0 database server    [ OK ]
root@mwrynnix:~#

Thursday, June 16, 2011

More on Pgpool-II: Errors that can be ignored during online recovery

Question: In the pgpool log, why are there some file not found errors like below? (These errors occurred during online recovery.)

FATAL:  the database system is starting up

...
4ded4257.51ef.7(0) 2011-06-06 17:10:51.493 EDT @: LOG:  restored log file "0000000A00000000000000C8" from archive
scp_wrapper: arg1=mwrynnix:/var/lib/postgresql/9.0/main2/archive_log/0000000A00000000000000C9, arg2=pg_xlog/RECOVERYXLOG arg3=
scp: /var/lib/postgresql/9.0/main2/archive_log/0000000A00000000000000C9: No such file or directory
4ded4257.51ef.8(0) 2011-06-06 17:10:51.775 EDT @: LOG:  could not open file "pg_xlog/0000000A00000000000000C9" (log file 0, segment 201): No such file or directory

The answer I found, from the author of pgpool: "I think you can disregard these errors. The files rsync complains about did exist once, so they were counted in the list to be copied. While rsync was doing the job, they disappeared, probably removed by PostgreSQL. This could happen if PostgreSQL is actively processing transactions, I think." (http://pgfoundry.org/pipermail/pgpool-general/2011-May/003687.html)

Pgpool-II Online Recovery (continued)

There is a caveat mentioned in the Pgpool-II manual, in the Online Recovery section:

"Note that there is a restriction about online recovery. If pgpool-II itself is installed on multiple hosts, online recovery does not work correctly, because pgpool-II has to stop all clients during the 2nd stage of online recovery. If there are several pgpool hosts, only one will have received the online recovery command and will block connections."

My question to the newsgroup (pgpool-general) was:

What should be the course of action if I run multiple pgpool-II instances, and I need to recover a database? For example, let my environment consist of multiple, redundant application servers, each with its own pgpool. Attached to each pgpool is the same set of Postgres clusters, of course. My guess would be to perform the following steps:

1) Shut down all pgpool/app server pairs except for one (thus, for a small period of time, all connections will go through the one remaining pgpool/app server - this is fine for me)
2) Perform online recovery on the remaining pgpool
3) Start up all the previously-shutdown pgpools/app servers.

I was told this is the right course to take, with one caveat noted. Pgpool keeps a status file indicating the status of each node (attached, detached, ...). This file should either be deleted before starting up the rest of the pgpool instances, OR, when starting up, use the -D option (discard the pgpool_status file and do not restore the previous status). Starting up pgpool without -D and then running pcp_attach_node is NOT a good solution, as transactions may be written while one or more nodes are considered detached. The whole dance is sketched below.
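
A sketch in shell form (hostnames are hypothetical; the pcp_recovery_node arguments follow the pgpool-II 3.0-era pcp syntax of timeout, host, port, user, password, node id):

# 1) stop every pgpool/app server pair except one
ssh app2 'pgpool -m fast stop'

# 2) run online recovery of node 1 through the surviving pgpool
pcp_recovery_node 20 localhost 9898 pcpuser pcppass 1

# 3) bring the others back up, discarding their stale pgpool_status
ssh app2 'pgpool -D'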

Pgpool-II Online Recovery

I've spent quite a bit of time learning about Pgpool-II online recovery and getting it working. To recover a broken database, run the recovery script, which will sync up the database and then attach it to pgpool. Unfortunately the examples in the Pgpool-II manual are not complete - they just give you the general idea. So I had to comb the internet to clear up some haziness and modify examples to make them work for me. Below is what I've got. It's tested and works. Stress tested, even - by bombarding pgpool with inserts before, during, and after recovery - and the result is all databases in sync. Some of this is borrowed code and I've lost track of who the author is. If it's your code, let me know and I'll properly credit you.

#!/bin/sh
###############################################
# copy_base_backup.sh - 1st stage of recovery
# XXX TODO: remove "exit 0" - exit with error codes when any command fails

PSQL=/usr/bin/psql
SCP=/usr/bin/scp
SSH=/usr/bin/ssh
LOGGER="log /var/log/postgresql/pgpool_copy_base_backup.log"
#"/usr/bin/logger -i -p local0.info -t pgpool"
RSYNC="/usr/bin/rsync --archive --quiet --compress --rsh=$SSH --delete"
BASENAME=`/usr/bin/basename $0`
HOSTNAME=`/bin/hostname`
ID=`/usr/bin/id -un`

# $1 = Database cluster path of a master node.
# $2 = Hostname of a recovery target node.
# $3 = Database cluster path of a recovery target node.

#PG_HOME=/var/lib/postgresql
SRC_ARCHIVE_LOG_DIR=/var/lib/postgresql/9.0/main2/archive_log

SRC_DATA=$1
DST_HOST=$2
DST_DATA=$3

SRC_DBNAME=postgres
SRC_PORT=5431

log()
{
    echo $(date +"%D %T") $2 >> $1
}

$LOGGER "Executing $BASENAME as user $ID"

$LOGGER "Executing pg_start_backup"
$PSQL -p $SRC_PORT -d $SRC_DBNAME -c "select pg_start_backup('pgpool-recovery')"

$LOGGER "Creating file recovery.conf"
echo "restore_command = '$SCP $HOSTNAME:$SRC_ARCHIVE_LOG_DIR/%f %p'" > $SRC_DATA/recovery.conf

$LOGGER "Rsyncing directory base"
$RSYNC $SRC_DATA/base/ $DST_HOST:$DST_DATA/base/
$LOGGER "Rsyncing directory global"
$RSYNC $SRC_DATA/global/ $DST_HOST:$DST_DATA/global/
$LOGGER "Rsyncing directory pg_clog"
$RSYNC $SRC_DATA/pg_clog/ $DST_HOST:$DST_DATA/pg_clog/
$LOGGER "Rsyncing directory pg_multixact"
$RSYNC $SRC_DATA/pg_multixact/ $DST_HOST:$DST_DATA/pg_multixact/
$LOGGER "Rsyncing directory pg_subtrans"
$RSYNC $SRC_DATA/pg_subtrans/ $DST_HOST:$DST_DATA/pg_subtrans/
$LOGGER "Rsyncing directory pg_tblspc"
$RSYNC $SRC_DATA/pg_tblspc/ $DST_HOST:$DST_DATA/pg_tblspc/
$LOGGER "Rsyncing directory pg_twophase"
$RSYNC $SRC_DATA/pg_twophase/ $DST_HOST:$DST_DATA/pg_twophase/
$LOGGER "Rsyncing directory pg_xlog"
$RSYNC $SRC_DATA/pg_xlog/ $DST_HOST:$DST_DATA/pg_xlog/
$LOGGER "Rsyncing file recovery.conf (with source deletion)"
$RSYNC --remove-source-files $SRC_DATA/recovery.conf $DST_HOST:$DST_DATA/

$LOGGER "Executing pg_stop_backup"
$PSQL -p $SRC_PORT -d $SRC_DBNAME -c 'select pg_stop_backup()'

exit 0

#!/bin/sh
###################################################
# pgpool_recovery_pitr.sh - 2nd stage of recovery
# XXX TODO: remove "exit 0" - exit with error codes when any command fails
# SRC_PORT and SRC_DBNAME need to be modified according to your system

DATADIR=$1
PSQL=/usr/bin/psql
LOGGER="log /var/log/postgresql/pgpool_recovery_pitr.log"
BASENAME=`/usr/bin/basename $0`
ID=`/usr/bin/id -un`
SRC_PORT=5431
SRC_DBNAME=postgres

log()
{
    echo $(date +"%D %T") $2 >> $1
}

$LOGGER "Executing $BASENAME as user $ID"

$LOGGER "Flushing sequence values to xlog"
# Force to flush current value of sequences to xlog
psql -p $SRC_PORT -t -c 'SELECT datname FROM pg_database WHERE NOT datistemplate AND datallowconn' template1 |
while read i
do
    if [ "$i" != "" ]; then
        psql -p $SRC_PORT -c "SELECT setval(oid, nextval(oid)) FROM pg_class WHERE relkind = 'S'" $i
    fi
done

$LOGGER "Executing pgpool_switch_xlog"
$PSQL -p $SRC_PORT -d $SRC_DBNAME -c "select pgpool_switch_xlog('$DATADIR/archive_log')"

exit 0

#!/bin/sh
###################
# pgpool_remote_start - 3rd stage (note there is no "3rd stage" in pgpool.conf - it seems to be hardcoded to run this script by name)
# XXX TODO: remove "exit 0" - exit with error codes when any command fails
# DST_DB_START should be modified - it is the command run on the recovery node - maybe "service postgresql start" is fine, or maybe you need to run pg_ctl, for example

SSH=/usr/bin/ssh
LOGGER="log /var/log/postgresql/pgpool_remote_start.log"
DST_DB_START="service postgresql start"

BASENAME=`/usr/bin/basename $0`
ID=`/usr/bin/id -un`

DST_HOST=$1
DST_DIR=$2

log()
{
    echo $(date +"%D %T") $2 >> $1
}

$LOGGER "Executing $BASENAME as user $ID"
$LOGGER "Starting remote PostgreSQL server"
# double quotes (not single) so $DST_DB_START expands before being sent to the remote host
$SSH -T $DST_HOST "$DST_DB_START" 2>/dev/null 1>/dev/null

exit 0

Friday, May 20, 2011

Classic AskTom Post

Everyone should read this one at least once. Yes even you. The question and his initial answer, at least - AskTom's "perfect example" post

Friday, May 06, 2011

Logging DDL in Oracle

I could have sworn I had it drilled into my head (from studying for the Oracle 10g OCA exam) that DDL is audited in the alert log. I had a need for this today, but it wasn't showing up.

Turns out, DDL in the alert log is a new feature in 11g, and to enable it you'll want to run:

ALTER SYSTEM SET enable_ddl_logging=TRUE SCOPE=BOTH;

For older versions you'll have to audit another way. One way that worked for my purposes today is a simple DDL trigger: http://www.dba-oracle.com/t_ddl_triggers.htm
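
A minimal sketch of such a trigger, assuming a hypothetical ddl_log table (the ora_* event attribute functions are built in):

create table ddl_log (
  ddl_date    date,
  ddl_user    varchar2(30),
  ddl_event   varchar2(30),
  object_name varchar2(128)
);

create or replace trigger ddl_audit_trg
after ddl on database
begin
  insert into ddl_log (ddl_date, ddl_user, ddl_event, object_name)
  values (sysdate, ora_login_user, ora_sysevent, ora_dict_obj_name);
end;
/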

Tuesday, May 03, 2011

Pgpool-II Load Balancing

I've been trying, so far unsuccessfully, to get load balancing working in a pgpool-II replication environment with two nodes. This is supposed to spread SELECT queries across multiple nodes, according to the nodes' values of backend_weight. Presumably it does so in a round-robin manner, and if the two nodes had the same weight, query 1 would go to one node, query 2 would go to the other node, query 3 would go back to the first node, and so on...However, my queries are all being sent to one node.

This table, taken from the pgpool-II docs, shows whether a SELECT query will replicate, be sent to master only, or be load balanced based on three conditions:
SELECT is inside a transaction block:  Y  Y  Y  N  N  N  Y  N
replicate_select is true:              Y  Y  N  N  Y  Y  N  N
load_balance_mode is true:             Y  N  N  N  Y  N  Y  Y
result:                                R  R  M  M  R  R  M  L
(R: replication, M: send only to master, L: load balance)

It's absolutely clear that I have replicate_select set to false and load_balance_mode set to true. I seem to be getting a result of "M", which must mean my SELECT is in a transaction block. It doesn't appear to be, though, according to my pg logs - at least, I don't see any BEGIN before the SELECT. Hmm!

If I fiddle with my two values of backend_weight and restart pgpool, it will switch to the node with the greater weight. Doesn't help me much, but it shows it's doing...something.
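
For reference, the relevant pgpool.conf settings on my end look like this:

load_balance_mode = true
replicate_select = false
backend_weight0 = 1
backend_weight1 = 1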

Stay tuned...

Tuesday, April 05, 2011

News: MySQL.com Database Compromised By Blind SQL Injection

As Tom Kyte puts it: JUST BIND
(Go here and laugh.)

Year 0 in Oracle

While troubleshooting a customer's date-related issue, it turned out he had a year 0000 date in his table. He showed me this query as evidence that his data looked good; however, two-digit years always smell fishy to me:

SQL> select trunc(min(time_stamp_h), 'MONTH') as t_min from mytable;

T_MIN
---------
01-SEP-00

At any rate the issue was resolved, but I couldn't even figure out how year 0 would have gotten in the database in the first place. Oracle seems to have a validation that does not allow year 0:

SQL> select to_date('9/1/0000', 'mm/dd/yyyy') from dual;
select to_date('9/1/0000', 'mm/dd/yyyy') from dual
*
ERROR at line 1:
ORA-01841: (full) year must be between -4713 and +9999, and not be 0

But... this blog post revealed that there's more to the story. (Short summary: Oracle doesn't always check for year 0 - only sometimes.)

Tuesday, March 15, 2011

New feature in Postgresql 9.0

Finally getting to use Postgres 9.0. This very simple feature alone makes me want to cry tears of joy. :)

GRANT SELECT ON ALL TABLES IN SCHEMA public TO toto;

(You can replace "TABLES" with "FUNCTIONS" or "SEQUENCES", and of course the inverse, REVOKE, is available.)
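
Its sibling, also new in 9.0, handles objects created in the future:

ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO toto;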

Monday, March 07, 2011

Brief chat with a buddy about MySQL index size limitation

(2:44:19 PM) Buddy: so many hacks in mysql :) like, you can't use functions as a default value for a column unless that function is called CURRENT_TIMESTAMP, which is the single exception
(2:44:59 PM) Me: ug :)
(2:45:59 PM) Buddy: index lengths are a major limit..I've been creating columns with the value of unhex(sha1(value)) and creating my unique indexes on that
(2:46:57 PM) Me: what's that expression?
(2:47:11 PM) Me: i think i have an idea... :)
(2:47:19 PM) Buddy: returns a 20 byte binary sha1 hash
(2:47:25 PM) Me: yeah..bleh :)
(2:48:59 PM) Me: i guess that's just for enforcing the uniqueness? Or would you look up by that too? (thus you have to remember to call unhex(...)) :)
(2:49:33 PM) Buddy: if I wanted to lookup using that index, then it would be lookup too
(2:49:47 PM) Me: yeah
(2:49:59 PM) Buddy: so my queries are like select value from table where valuesha1=unhex(sha1(thingIwant));
(2:50:14 PM) Me: bleh :)
(2:54:09 PM) Buddy: as long as your index is less than 767 bytes yer ok :)
(2:54:32 PM) Me: what are you indexing on that's bigger? you might have told me before...
(2:54:57 PM) Buddy: URLs :)
(2:55:00 PM) Me: ahh
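
Buddy's workaround, sketched out (hypothetical table; the 767-byte InnoDB index key limit is the one from the chat):

CREATE TABLE page (
  id       INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
  url      TEXT NOT NULL,         -- too long to index directly
  url_sha1 BINARY(20) NOT NULL,   -- unhex(sha1(url)) is always 20 bytes
  UNIQUE KEY uk_page_url_sha1 (url_sha1)
);

INSERT INTO page (url, url_sha1)
VALUES ('http://example.com/some/long/url', UNHEX(SHA1('http://example.com/some/long/url')));

SELECT url FROM page
WHERE url_sha1 = UNHEX(SHA1('http://example.com/some/long/url'));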

Tuesday, February 22, 2011

:eye roll:

Oracle Universal Installer can't be run from a directory with spaces in its name (on Unix), or else it bombs! This is apparently due to a lack of quotes in the runInstaller script, where a local variable is set to `pwd`.
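
The failure mode in miniature (a sketch of this kind of bug, not the actual runInstaller code):

#!/bin/sh
DIR=`pwd`     # suppose the current directory is "/home/oracle/oracle installer"
cd $DIR       # unquoted: word-splits into two arguments and fails
cd "$DIR"     # quoted: works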

Friday, February 11, 2011

Get a database working...NOW

Problem: "I need an Oracle 11g database NOW and I think there's one at [hostname spit] but it doesn't work!"

Solution: Well as I didn't know of any others I jumped onto spit. First I checked if an Oracle instance was running:

oracle@spit:~$ ps -ef | grep pmon
oracle 17973 17946 0 10:55 pts/4 00:00:00 grep --color=auto pmon

Apparently not. OK, so are any databases installed?

oracle@spit:~$ tail -n 5 /etc/oratab
# Multiple entries with the same $ORACLE_SID are not allowed.
#
#
orcl:/opt/oracle/product/11.2.0/dbhome_1:N
spitfire:/opt/oracle/product/11.2.0/dbhome_1:N

Yes, two SIDs; orcl and spitfire. OK let's see if we can get into sqlplus to get one of them started.

oracle@spit:~$ /opt/oracle/product/11.2.0/dbhome_1/bin/sqlplus
Error 6 initializing SQL*Plus
SP2-0667: Message file sp1.msb not found
SP2-0750: You may need to set ORACLE_HOME to your Oracle software directory

Doh! Apparently there's no $ORACLE_HOME set… Therefore $ORACLE_SID is probably not set either:

oracle@spit:~$ echo $ORACLE_HOME

oracle@spit:~$ echo $ORACLE_SID

Well, let's set them.
oracle@spit:~$ export ORACLE_HOME=/opt/oracle/product/11.2.0/dbhome_1
oracle@spit:~$ export ORACLE_SID=orcl
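
(As a side note, Oracle ships an oraenv script that can set both of these for you from /etc/oratab - a sketch, assuming it's on the PATH:)

oracle@spit:~$ . oraenv
ORACLE_SID = [] ? orcl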

Next, I checked whether the listener was running, and when I found it wasn't I started it:

oracle@spit:/opt/oracle/product/11.2.0/dbhome_1$ ./bin/lsnrctl start

LSNRCTL for Linux: Version 11.2.0.1.0 - Production on 11-FEB-2011 11:13:03

Copyright (c) 1991, 2009, Oracle. All rights reserved.

Starting /opt/oracle/product/11.2.0/dbhome_1/bin/tnslsnr: please wait...

TNSLSNR for Linux: Version 11.2.0.1.0 - Production
System parameter file is /opt/oracle/product/11.2.0/dbhome_1/network/admin/listener.ora
Log messages written to /opt/oracle/diag/tnslsnr/spot/listener/alert/log.xml
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=spot)(PORT=1521)))

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC1521)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux: Version 11.2.0.1.0 - Production
Start Date 11-FEB-2011 11:13:05
Uptime 0 days 0 hr. 0 min. 0 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File /opt/oracle/product/11.2.0/dbhome_1/network/admin/listener.ora
Listener Log File /opt/oracle/diag/tnslsnr/spot/listener/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=spot)(PORT=1521)))
The listener supports no services
The command completed successfully
Now let's start up the instance!

oracle@spit:/opt/oracle/product/11.2.0/dbhome_1$ sqlplus sys/******* as sysdba

SQL*Plus: Release 11.2.0.1.0 Production on Fri Feb 11 11:27:30 2011

Copyright (c) 1982, 2009, Oracle. All rights reserved.

Connected to an idle instance.

SQL> startup
ORACLE instance started.

Total System Global Area 1690705920 bytes
Fixed Size 1336960 bytes
Variable Size 1291848064 bytes
Database Buffers 385875968 bytes
Redo Buffers 11644928 bytes
Database mounted.
Database opened.
SQL> quit;
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
As a side note, I had set $ORACLE_SID to the wrong value earlier. Oracle looks up the spfile's name by this SID, so if it's wrong, the file will not be found (here I had set it to "oracl" instead of "orcl"):

SQL> startup
ORA-01078: failure in processing system parameters
LRM-00109: could not open parameter file '/opt/oracle/product/11.2.0/dbhome_1/dbs/initoracl.ora'
Let's check whether the instance is picked up by the listener…
oracle@spit:/opt/oracle/product/11.2.0/dbhome_1$ lsnrctl status

LSNRCTL for Linux: Version 11.2.0.1.0 - Production on 11-FEB-2011 11:29:14

Copyright (c) 1991, 2009, Oracle. All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC1521)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux: Version 11.2.0.1.0 - Production
Start Date 11-FEB-2011 11:13:05
Uptime 0 days 0 hr. 16 min. 9 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File /opt/oracle/product/11.2.0/dbhome_1/network/admin/listener.ora
Listener Log File /opt/oracle/diag/tnslsnr/spot/listener/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=spot)(PORT=1521)))
Services Summary...
Service "orcl" has 1 instance(s).
Instance "orcl", status READY, has 1 handler(s) for this service...
Service "orclXDB" has 1 instance(s).
Instance "orcl", status READY, has 1 handler(s) for this service...
The command completed successfully

And finally let's start up the EM DB Control:

oracle@spit:/opt/oracle/product/11.2.0/dbhome_1$ emctl start dbconsole
ulimit: 25: bad number
Oracle Enterprise Manager 11g Database Control Release 11.2.0.1.0
Copyright (c) 1996, 2009 Oracle Corporation. All rights reserved.
https://spot:1158/em/console/aboutApplication
Starting Oracle Enterprise Manager 11g Database Control ............. started.
(Note that "ulimit 25: bad number" wackiness. Some info on that here: http://johanlouwers.blogspot.com/2010/04/emctl-ulimit-25-bad-number.html)

At any rate, my coworker who needed this database ASAP gave me the thumbs up...w00t! But, will all these components start up automagically upon reboot? Stay tuned...

Wednesday, January 19, 2011

Once again...

Once again I'm thwarted (well, slowed down) by a database limitation that, to me, seems silly and arbitrary.

4000 character limit in SSIS

Friday, January 14, 2011

Disabling/enabling indexes (Oracle)

I whipped up the following two procedures for disabling and enabling all indexes on a specified table. They don't handle every kind of index just yet. Technically, they mark indexes as unusable and then rebuild them. Unique/primary key indexes are not touched by these procedures, and there may be a few other index types not yet supported.

SQL Developer made the caps wacky - need to find the setting. :)

create or replace procedure DISABLE_INDEXES_ON_TABLE(P_OWNER_NAME in varchar2, P_TABLE_NAME in varchar2) as
  DISABLE_STR long;
begin
  for REC in (select * from ALL_INDEXES
              where OWNER = P_OWNER_NAME and TABLE_NAME = P_TABLE_NAME and UNIQUENESS = 'NONUNIQUE') loop
    if REC.INDEX_TYPE in ('NORMAL', 'NORMAL/REV', 'FUNCTION-BASED DOMAIN', 'FUNCTION-BASED NORMAL') then
      -- owner-qualify the index name, and execute INSIDE the if block -
      -- otherwise an unsupported index type would re-execute the previous statement
      DISABLE_STR := 'ALTER INDEX ' || P_OWNER_NAME || '.' || REC.INDEX_NAME || ' UNUSABLE';
      execute immediate DISABLE_STR;
    end if;
  end loop;
end;

create or replace procedure ENABLE_INDEXES_ON_TABLE(P_OWNER_NAME in varchar2, P_TABLE_NAME in varchar2) as
  REBUILD_STR long;
begin
  for REC in (select * from ALL_INDEXES
              where OWNER = P_OWNER_NAME and TABLE_NAME = P_TABLE_NAME and UNIQUENESS = 'NONUNIQUE') loop
    if REC.INDEX_TYPE in ('NORMAL', 'NORMAL/REV', 'FUNCTION-BASED DOMAIN', 'FUNCTION-BASED NORMAL') then
      REBUILD_STR := 'ALTER INDEX ' || P_OWNER_NAME || '.' || REC.INDEX_NAME || ' REBUILD';
      execute immediate REBUILD_STR;
    end if;
  end loop;
end;
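
Usage is simply (a sketch, with hypothetical schema and table names):

exec DISABLE_INDEXES_ON_TABLE('MYSCHEMA', 'BIG_TABLE')
-- bulk load goes here...
exec ENABLE_INDEXES_ON_TABLE('MYSCHEMA', 'BIG_TABLE')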