Wednesday, December 19, 2007

Two weeks back I attended an Open Source conference called FOSS.IN. I wanted to attend at least three of the four days, which were full of very interesting talks, but could make it for only a day and a half. The crowd was very techie, and this conference is a must-attend if you have anything to do with open source in India.

One of the snaps is of a kernel guy who could not stand in one place; he was so difficult to catch in a snap that I had to take a video ;). His talk was really good.

Wednesday, November 14, 2007

Third IEEE International Conference on e-Science and Grid Computing


Third IEEE International Conference on e-Science
and Grid Computing
December 10-13, 2007, Bangalore, India


Sponsored By:
IEEE Computer Society's Technical Committee on Scalable Computing

Organised/Supported by:
Centre for Development of Advanced Computing, India
The University of Melbourne, Australia
Indiana University, USA
LSU Center for Computation & Technology, USA
EuroIndiaGrid Project
OMII (Open Middleware Infrastructure Institute), UK
Microsoft Corporation
Hewlett Packard (HP)


* Advance Registration Deadline: Nov 7, 2007

The e-Science 2007 conference, sponsored by the IEEE Computer Society's Technical Committee for Scalable Computing (TCSC), is designed to bring together leading international and interdisciplinary research communities, developers, and users of e-Science applications and enabling IT technologies. The conference serves as a forum to present the results of the latest research and product/tool developments, and highlight related activities from around the world.


The conference features plenary keynote speakers drawn from Europe, North America, and Asia.

The conference also features technical talks from industry.

Contributed Papers:
-------------------

The Program Committee has selected 60 top quality research papers out of 206 submissions from all over the world for presentation at the conference.


Workshops:
----------

* OGF (Open Grid Forum) Workshop on eScience Highlights
* Innovative and Collaborative Problem Solving Environment in Distributed Resources
* Scientific Workflows and Business Workflow Standards in e-Science
* International Grid Interoperability and Interoperation Workshop

Posters and Research Demos:
---------------------------

The conference features 21 posters and 5 "live" research demos selected from submissions from all over the world.


Tutorials:
----------

1. Introduction to Globus Toolkit GT4
Presenter: Ravi Madduri, Argonne National Laboratory, USA

2. Market-based Grid Computing and the Gridbus Middleware
Presenter: Rajkumar Buyya, The University of Melbourne, Australia

3. Autonomic Grid Computing
Presenters: Manish Parashar, Rutgers University (USA) and Omer Rana, Cardiff University (UK)

4. Applications enablement on Grid
Presenters: Mangala and Prahlad Rao, C-DAC, India

The exhibition session will consist of exhibits/presentations from vendor companies and R&D laboratories.

Travel Support:
---------------
Travel support sponsored by the IEEE Technical Committee on Scalable Computing, The University of Melbourne, and C-DAC is being offered to students. All eligible research-degree students are encouraged to apply for one of the following scholarships:
1. International Students (TCSC supported):
http://www.ieeetcsc.org/young/eScience07/TCSCgrant.html
2. India-based Students (TCSC and Uni. of Melbourne supported):
http://www.ieeetcsc.org/young/eScience07/TCSC-UnimelbGrant.html
3. C-DAC supported (For Indian students only):
http://www.escience scholarship.asp

Registration:
-------------
The conference registration includes attendance at all e-Science (1) workshops, (2) tutorials, (3) technical sessions, (4) posters and research demos, and (5) exhibits, and (6) a copy of the conference proceedings published by the IEEE Computer Society.

Wednesday, October 31, 2007

JavaOne Conference

JavaOne Conference
Call for Papers is OPEN
Submit your proposal today - Deadline is November 16, 2007

JavaOne, Sun's 2008 Worldwide Developer Conference, is seeking proposals for technical sessions and Birds-of-a-Feather (BOFs) sessions for this year's Conference.

Attracting over 15,000 developers and leaders in the developer community - from industry leaders to experienced developers to those just starting out - this conference brings together some of the industry's best and brightest.

The JavaOne conference is your opportunity to reach this specialized community by educating it and sharing your experience and expertise.

Additional information on the program can be found at:

Powered by ScribeFire.

Tuesday, October 09, 2007


Here is a great blog about ZFS from a Mac OS guy.

ZFS rocks


Wednesday, September 26, 2007

No longer a scientist

About five months back I moved to another company called Sun Microsystems: from "Advanced Computing for Human Advancement" to "The Network is the Computer".


Tuesday, May 01, 2007

Format of passwd and shadow files

Format of the /etc/passwd file

A non-shadowed /etc/passwd file has the following colon-separated fields:

  1. The user (login) name
  2. The encoded password
  3. Numerical user ID
  4. Numerical default group ID
  5. The user's full name - actually this field is called the GECOS (General Electric Comprehensive Operating System) field and can store information other than just the full name. The shadow commands and manual pages refer to this field as the comment field.
  6. User's home directory (full pathname)
  7. User's login shell (full pathname)

For example:
username:Npge08pfz4wuk:503:100:Full Name:/home/username:/bin/sh
Where Np is the salt and ge08pfz4wuk is the encoded password. The encoded salt/password could just as easily have been kbeMVnZM0oL7I and the two are exactly the same password. There are 4096 possible encodings for the same password. (The example password in this case is 'password', a really bad password).

Once the shadow suite is installed, the /etc/passwd file would instead contain:

username:x:503:100:Full Name:/home/username:/bin/sh
The x in the second field in this case is now just a place holder. The format of the /etc/passwd file really didn't change, it just no longer contains the encoded password. This means that any program that reads the /etc/passwd file but does not actually need to verify passwords will still operate correctly.
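Since the field layout is unchanged, the colon-separated entries can be split with standard shell tools. A minimal sketch, reusing the sample entry from the text above:

```shell
# Parse the sample /etc/passwd entry into its seven fields
line='username:x:503:100:Full Name:/home/username:/bin/sh'
IFS=: read -r name passwd uid gid gecos home shell <<EOF
$line
EOF
echo "user=$name uid=$uid home=$home shell=$shell"
```

The same read works against real entries, e.g. the output of `getent passwd root`.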

The passwords are now relocated to the shadow file (usually /etc/shadow file).

Format of the shadow file

The /etc/shadow file contains the following colon-separated fields:

  1. The user name
  2. The encoded password
  3. Days since Jan 1, 1970 that the password was last changed
  4. Days before the password may be changed
  5. Days after which the password must be changed
  6. Days before password expiry that the user is warned
  7. Days after password expiry that the account is disabled
  8. Days since Jan 1, 1970 that the account has been disabled
  9. A reserved field

The previous example might then be (the numeric day counts here are illustrative defaults):

username:Npge08pfz4wuk:12898:0:99999:7:::
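The day counts in the third and later fields are days since the Unix epoch; GNU date can convert such a value back to a calendar date. A small sketch (the value 12898 is purely illustrative, not from a real account):

```shell
# Convert a shadow-style "days since Jan 1, 1970" value to a date (GNU date assumed)
days=12898   # illustrative value
last_changed=$(date -u -d "1970-01-01 + ${days} days" +%Y-%m-%d)
echo "$last_changed"
```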

Configuring Quota on Linux

Configuration of disk usage quotas on Linux - Perform the following as root:

  1. Edit /etc/fstab to add the qualifier "usrquota" or "grpquota" to the partition's mount options. The following file system mounting options can be specified in /etc/fstab: grpquota, noquota, quota and usrquota. (These options are also accepted by the mount command but ignored there.) The filesystem, when mounted, will show up in /etc/mtab, the list of all currently mounted filesystems.

    • To enable user quota support on a file system, add "usrquota" to the fourth field containing the word "defaults".
      /dev/hda2 /home ext3 defaults,usrquota 1 1
    • Replace "usrquota" with "grpquota", should you need group quota support on a file system.
      /dev/hda2 /home ext3 defaults,grpquota 1 1
    • Need both user quota and group quota support on a file system?
      /dev/hda2 /home ext3 defaults,usrquota,grpquota 1 1
      This enables user and group quotas support on the /home file system.

  2. touch /partition/aquota.user
    where the partition might be /home or some partition defined in /etc/fstab.
    chmod 600 /partition/aquota.user

    The file should be owned by root. Quotas may also be set for groups by using the file aquota.group (quota.group for version 1).

    Quota file names:

    • Quota Version 2 (Linux 2.4/2.6 kernel: Red Hat 7.1+/8/9, FC 1-3): aquota.user, aquota.group
    • Quota Version 1 (Linux 2.2 kernel: Red Hat 6, 7.0): quota.user, quota.group
    The files can be converted/upgraded using the convertquota command.
  3. Re-boot or re-mount file partition with quotas.
    • Re-boot: shutdown -r now
    • Re-mount partition: mount -o remount /partition

    After re-booting or re-mounting the file system, the partition will show up in the list of mounted filesystems as having quotas. Check /etc/mtab:
    /dev/hda5 / ext3 rw,usrquota 0 0

  4. quotacheck -vgum /partition
    quotacheck -vguma
    • For example (Linux kernel 2.4+: Red Hat 7.1+, Fedora): quotacheck -vguma
      quotacheck: WARNING -  Quotafile //aquota.user was probably truncated. ...
      quotacheck: Scanning /dev/hda5 [/] done
      quotacheck: Checked 9998 directories and 179487 files

    • For example (Linux kernel 2.2: Red Hat 6/7.0): quotacheck -v /dev/hda6
      System response:
            Scanning /dev/hda6 [/home] done
      Checked 444 directories and 3136 files
      Using quotafile /home/quota.user

    Quotacheck scans a file system for disk usage and updates the quota record file (quota.user/aquota.user) to the most recent state. It is recommended that quotacheck be run at boot (part of the Red Hat default installation).

    Man page: quotacheck - scan a filesystem for disk usage, create, check and repair quota files

  5. quotaon -av
    System Response: /dev/hda6: user quotas turned on

    quotaon - enable disk quotas on a file system.
    quotaoff - turn off disk quotas for a file system.

    Man page: quotaon - turn filesystem quotas on and off

  6. edquota -u user_id
    Edit directly using vi editor commands. (See below for more info.)
    For example: edquota -u user1
    • System Response (RH 7+):
      Disk quotas for user user1 (uid 501):
      Filesystem  blocks  soft  hard  inodes  soft  hard
      /dev/hda5     1944     0     0     120     0     0
      • blocks: 1k blocks
      • inodes: Number of entries in directory file
      • soft: Max number of blocks/inodes a user may have on the partition before a warning is issued and the grace period countdown begins.
        If set to "0" (zero) then no limit is enforced.
      • hard: Max number of blocks/inodes user may have on partition.
        If set to "0" (zero) then no limit is enforced.

    • System Response (RH 6):
                 Quotas for user user1:
      /dev/sdb6: blocks in use: 56, limits (soft = 0, hard = 0)
      inodes in use: 50, limits (soft = 0, hard = 0)
      Something failed if you get the response:
                 /dev/sdb6: blocks in use: 0, limits (soft = 0, hard = 0)
      inodes in use: 0, limits (soft = 0, hard = 0)

      Edit limits:
                 Quotas for user user1:
      /dev/hda6: blocks in use: 992, limits (soft = 50000, hard = 55000)
      inodes in use: 71, limits (soft = 10000, hard = 11000)

    If editing group quotas: edquota -g group_name

    Man page: edquota - edit user quotas

  7. List quotas:
    quota -u user_id

    For example: quota -u user1
    System response:

    Disk quotas for user user1 (uid 501):
    Filesystem  blocks  quota  limit  grace  files  quota  limit  grace
    /dev/hda6      992  50000  55000            71  10000  11000
    If this does not respond similar to the above, then restart the computer: shutdown -r now

    Man page: quota - display disk usage and limits

Quota Reports
  • Report on all users over quota limits: quota -q
  • Quota summary report: repquota -a
    *** Report for user quotas on device /dev/hda5
    Block grace time: 7days; Inode grace time: 7days
                       Block limits             File limits
    User         used     soft  hard  grace    used  soft  hard  grace
    root   --  4335200       0     0         181502     0     0
    bin    --    15644       0     0            101     0     0
    user1  --     1944       0     0            120     0     0
    No limits are shown for these users because the limits are set to 0.

    Man page: repquota - summarize quotas for a filesystem.

Quotacheck should scan the file system periodically via a cron job (say, weekly). Add a script to the /etc/cron.weekly/ directory.
File: /etc/cron.weekly/runQuotacheck
  • Linux Kernel 2.4: Red Hat 7.1 - Fedora Core 3:
    /sbin/quotacheck -vguma
  • Linux Kernel 2.2: Red Hat 6/7.0:
    /sbin/quotacheck -v -a

(Remember to chmod +x /etc/cron.weekly/runQuotacheck)

EdQuota Notes:

The "edquota" command puts you into a "vi" editing session, so knowledge of the "vi" editor is necessary. Another editor may be specified with the EDITOR environment variable. You are NOT editing the quota file directly: /partition/aquota.user (or quota.user) is a binary file. The edquota command gives you an ASCII interface with the text prepared for you. When you ":wq" to save the file from the vi session, it is converted to binary by edquota and stored in the quota file.

Assigning the same quota values to a group of users: to rapidly set quotas for all users to the same values as user user1, first edit user1's quota information by hand, then execute:

  edquota -p user1 `awk -F: '$3 > 499 {print $1}' /etc/passwd`

This assumes that user uids start at 500 and increment upwards. "blocks in use" is the total number of blocks (in kilobytes) a user has consumed on a partition; "inodes in use" is the total number of files a user has on a partition.
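The awk filter inside the backquotes above can be tried on its own; this sketch feeds it made-up passwd lines (not a real /etc/passwd) to show the uid selection:

```shell
# Print login names of accounts with uid > 499 (sample data only)
selected=$(printf '%s\n' \
  'root:x:0:0:root:/root:/bin/bash' \
  'user1:x:501:501::/home/user1:/bin/sh' \
  'user2:x:502:502::/home/user2:/bin/sh' \
  | awk -F: '$3 > 499 {print $1}')
echo "$selected"
```

Only user1 and user2 pass the `$3 > 499` test, so those are the names that would be handed to edquota -p.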

edquota options:

  -r               Edit quotas on a remote server using RPC; the remote server must run the rpc.rquotad daemon
  -u               Edit user quota
  -g               Edit group quota
  -p user-id       Duplicate the quotas of an existing prototype user
  -F format        Quota format: vfsold (version 1), vfsv0 (version 2), rpc (quotas over NFS), xfs (quotas for the XFS filesystem)
  -f /file-system  Perform on the specified filesystem; the default is to apply to all filesystems with quotas
  -t               Edit the soft time limits (grace periods) for each filesystem
  -T               Edit the time for a user/group when the soft limit is enforced; specify a number and unit, or "unset"

Soft Limit and Hard Limits:

The soft limit is the maximum disk usage a quota user may have on a partition. Combined with the "grace period", it acts as the border line: a quota user is warned about an impending quota violation once it is passed. The hard limit works only when a "grace period" is set; it specifies the absolute limit on disk usage, beyond which a quota user cannot go.

Grace Period:

"Grace period" is configured with the command "edquota -t". The grace period is the time allowed before the "soft limit" is enforced on a file system with quotas enabled. Time units of sec(onds), min(utes), hour(s), day(s), week(s), and month(s) can be used. This is what you'll see with "edquota -t":

System response:

  • Linux Kernel 2.4+: Red Hat 7.1+/Fedora:
    Grace period before enforcing soft limits for users:
    Time units may be: days, hours, minutes, or seconds
    Filesystem Block grace period Inode grace period
    /dev/hda5 7days 7days
  • Linux Kernel 2.2: Red Hat 6/7.0:
    Time units may be: days, hours, minutes, or seconds
    Grace period before enforcing soft limits for users:
    /dev/hda2: block grace period: 0 days, file grace period: 0 days

Change the "0 days" part to any length of time that seems reasonable. A good choice might be 7 days (1 week).

Quota files: (non-XFS file systems)

The edquota command will create/edit the quota file at the root of the file system. (See /etc/mtab for the list of the currently mounted filesystems.)
  • Version 2: aquota.user, aquota.group
  • Version 1: quota.user, quota.group

Self Signed SSL certificates

Use self-signed certificates to test single systems, such as a test web server. Self-signed certificates become impractical in any other case. A local CA, while more complex to set up, reduces the number of keys that need to be distributed for verification and properly replicates a real-world certificate environment.

Creation of certificates requires the openssl utility. This command should be part of an OpenSSL installation, though may be installed out of the standard search path in /usr/local/ssl/bin or elsewhere.

$ which openssl

  1. Generate the Rivest, Shamir and Adleman (RSA) key.
     OpenSSL can generate a Digital Signature Algorithm (DSA) key (with the gendsa option), though for compatibility RSA keys are most frequently used. Learn more about the genrsa option to openssl.

    $ openssl genrsa 1024 > host.key
    $ chmod 400 host.key

    Modern systems should provide a random device and sufficient entropy for key generation. The data in the host.key file must be protected, as anyone with this information can decrypt traffic encrypted with this key.

  2. Create the certificate.
     Learn more about the req option to openssl. The -new, -x509 and -nodes arguments are required to create an unencrypted certificate. The -days argument specifies how long the certificate will be valid.

    $ openssl req -new -x509 -nodes -sha1 -days 365 -key host.key > host.cert

    Questions may be asked to fill out the certificate’s x509 attributes. The answers should be adjusted for the locale:

    Country Name (2 letter code) [AU]:US
    State or Province Name (full name) [Some-State]:Washington
    Locality Name (eg, city) []:Seattle
    Organization Name (eg, company) [Internet Widgits Pty Ltd]
    Organizational Unit Name (eg, section) []:
    Common Name (eg, YOUR name) []
    Email Address []

The Common Name field usually must exactly match the hostname of the system the certificate will be used on; otherwise, clients will complain about a certificate/hostname mismatch.

    The certificate data in the host.cert file does not need to be protected like the private key file does. In fact, it will likely need to be transferred to all the client systems that need to verify the key of the server being connected to. If this is the case, setup a CA, and distribute the signing certificate to the clients instead of each self-signed certificate.

  3. Extract metadata (optional).
     Various certificate metadata can be saved for quick reference, for example to verify the key fingerprint. Learn more about the x509 option to openssl.

    $ openssl x509 -noout -fingerprint -text < host.cert

  4. Combine key and certificate data (optional).
     Some applications may require that the key and certificate data be in a single file. I recommend keeping the key and certificate data separate if possible, as the key data needs to be protected while the certificate data should be available to all. Combining the data means the resulting file must be protected like a key file.

    $ cat host.cert host.key > host.pem \
    && rm host.key

    $ chmod 400 host.pem

The host.cert certificate data will need to be exported to client systems for use in testing.
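The interactive questions can also be skipped entirely with the -subj argument. A non-interactive variant of the steps above, as a sketch: the file names and subject fields are just examples, a 2048-bit key is used instead of 1024, and the digest is left at the modern OpenSSL default (SHA-256) rather than forcing -sha1.

```shell
# Work in a scratch directory so no real files are touched
cd "$(mktemp -d)"

# Generate key and self-signed certificate in one non-interactive command
openssl req -new -x509 -nodes -days 365 \
  -newkey rsa:2048 -keyout host.key \
  -subj '/C=US/ST=Washington/L=Seattle/CN=localhost' \
  -out host.cert
chmod 400 host.key

# Confirm the subject that was written into the certificate
openssl x509 -noout -subject -in host.cert
```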

The openssl.cnf file

Localize the system openssl.cnf to include relevant X509 attributes of the certificate. This will save typing and avoid errors when creating certificates. The location of this file varies by system.

$ grep Name_default /etc/ssl/openssl.cnf
countryName_default = US
stateOrProvinceName_default = Washington
0.organizationName_default =
#1.organizationName_default = World Wide Web Pty Ltd
#organizationalUnitName_default =

Friday, April 27, 2007

UW to Dovecot migration

Configuration primer for a migration from UW IMAP, with pine, Thunderbird and squirrelmail as clients. For compatibility the mbox format is used.

A setup for pine with IMAP access without entering any password is also discussed. Please use dovecot 1.0beta1 or later for correct handling with pine.

Short overview of Mail folders:
| Use                | elm              | pine             | Thunderbird      | squirrelmail                  | UW imapd | dovecot |
| Base directory     | ~/Mail           | ~/Mail           | ~/mail           | as configured                 | mail     |         |
| Sent folder        | ~/Mail/sent      | ~/Mail/sent      | ~/mail/sent-mail | Sent                          | Sent     |         |
| Trash folder       | ~/Mail/Trash     |                  | -                | Trash                         | Trash    |         |
| Drafts folder      | ~/Mail/Drafts    |                  | saved-messages   | Drafts                        | Drafts   |         |
| Templates          | ~/Mail/Templates |                  |                  | Templates                     |          |         |
| Unsent folder      | same as Thunderbird |               |                  | Local Folders/Unsent Messages |          |         |
| Postponed          |                  |                  | postponed-msgs   |                               |          |         |
| Canceled mail      |                  | ~/Canceled.mail  | ~/dead.letter    |                               |          |         |
| Personal namespace |                  |                  |                  |                               |          |         |
| Public namespace   |                  |                  |                  |                               | #news    |         |
| Other users        |                  |                  |                  |                               |          |         |

pine setup:
Just add the following server definition:

Nickname : Mail
Server : localhost/notls
Path : Mail/
View :

In ~/.pinerc modify the following configuration parameters:
rsh-command=/usr/sbin/dovecot --exec-mail imap
# For large Mailboxes

For details have a look at:

Migration from UW Imapd to dovecot:
Disable UW Imapd in xinetd

Migrate Mailboxes:
cd $USER
cp .mailboxlist .subscriptions

dovecot configuration:
Config (/etc/dovecot.conf):
UW Imapd compatible
protocols = imaps

default_mail_env = mbox:~:INBOX=/var/mail/%u

mail_full_filesystem_access = yes

mbox_read_locks = fcntl
mbox_write_locks = fcntl


Self signed Certificate for SSL:
cd /etc/pki/dovecot/private
openssl genrsa -out dovecot.pem 2048
openssl req -new -x509 -nodes -sha1 -days 3650 -key dovecot.pem >../dovecot.pem
Enter the data for the certificate

dovecot debugging:
Config (/etc/dovecot.conf):
mail_executable = /usr/libexec/dovecot/rawlog /usr/libexec/dovecot/imap
Directory ~/dovecot.rawlog must exist and the input/output will be logged

For ethereal debugging use the following dovecot configuration:
protocols = imap imaps

disable_plaintext_auth = no

maildir/mbox documentation:

squirrelmail configuration (config.php):

$use_imap_tls = true;
$imapPort = 993;
$imap_server_type ='dovecot';
$optional_delimiter = 'detect';
$force_username_lowercase = true;

$default_folder_prefix = '~/Mail/';
$sent_folder = 'sent';
$show_prefix_option = false;
$show_contain_subfolders_option = false;

Thunderbird Plugins
With IMAP folders the Xpunge plugin is very useful to keep folders consistent.


Securing Apache through SSL

SSL Configuration

The previous sections introduced the (not-so-basic) concepts behind SSL, and you have learned how to generate keys and certificates. Now, finally, you can configure Apache to support SSL. mod_ssl must either be compiled statically or, if you have compiled it as a loadable module, the appropriate LoadModule directive must be present in the configuration file.

If you compiled Apache yourself, a new Apache configuration file, named ssl.conf, should be present in the conf/ directory. That file contains a sample Apache SSL configuration and is referenced from the main httpd.conf file via an Include directive.

If you want to start your configuration from scratch, you can add the following configuration snippet to your Apache configuration file:

Listen 80
Listen 443

<VirtualHost _default_:443>
    SSLEngine on
    SSLCertificateFile /path/to/host.cert
    SSLCertificateKeyFile /path/to/host.key
</VirtualHost>
With the previous configuration, you set up a new virtual host that will listen to port 443 (the default port for HTTPS) and you enable SSL on that virtual host with the SSLEngine directive.

You need to indicate where to find the server's certificate and the file containing the associated key. You do so by using the SSLCertificateFile and SSLCertificateKeyFile directives.

Starting the Server

Now you can stop the server if it is running, and start it again. If your key is protected by a pass phrase, you will be prompted for it. After this, Apache will start and you should be able to connect securely to it via the https:// URL.

If you compiled and installed Apache yourself, in many of the vendor configuration files you can see that the SSL directives are surrounded by an <IfDefine SSL> block. That allows for conditional starting of the server in SSL mode. If you start the httpd server binary directly, you can pass it the -DSSL flag at startup. You can also use the apachectl script by issuing the apachectl startssl command. Finally, if you always want to start Apache with SSL support, you can just remove the <IfDefine> section and start Apache in the usual way.

If you are unable to successfully start your server, check the Apache error log for clues about what might have gone wrong. For example, if you cannot bind to the port, make sure that another Apache instance is not already running. You must have administrator privileges to bind to port 443; otherwise, you can change the port to, say, 8443 and include it explicitly in the https:// URL.

Configuration Directives

mod_ssl provides comprehensive technical reference documentation. This information will not be reproduced here; rather, I will explain what is possible and which configuration directives you need to use. You can then refer to the online SSL documentation bundled with Apache for the specific syntax or options.


You can control which ciphers and protocols are used via the SSLCipherSuite and SSLProtocol directives. For example, you can configure the server to use only strong encryption with the following configuration:

SSLProtocol all

See the Apache documentation for a detailed description of all available ciphers and protocols.
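As a concrete illustration of restricting the server to strong encryption (these particular values are an assumption on my part, not from the original configuration), one commonly used combination disables the old SSLv2/SSLv3 protocols and the weak cipher classes:

```apache
SSLProtocol all -SSLv2 -SSLv3
SSLCipherSuite HIGH:!aNULL:!MD5
```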

Client Certificates

Similarly to how clients can verify the identity of servers using server certificates, servers can verify the identity of clients by requiring a client certificate and making sure that it is valid.

SSLCACertificateFile and SSLCACertificatePath are two Apache directives used to specify trusted Certificate Authorities. Only clients presenting certificates signed by these CAs will be allowed access to the server.

The SSLCACertificateFile directive takes a file containing a list of CAs as an argument. Alternatively, you could use the SSLCACertificatePath directive to specify a directory containing trusted CA files. Those files must have a specific format, described in the documentation. SSLVerifyClient enables or disables client certificate verification. SSLVerifyDepth controls the number of delegation levels allowed for a client certificate. The SSLCARevocationFile and SSLCARevocationPath directives enable you to specify certificate revocation lists to invalidate certificates.


SSL is a protocol that requires intensive calculations. mod_ssl and OpenSSL allow several ways to speed up the protocol by caching some of the information about the connection. You can cache certain settings using the SSLSessionCache and SSLSessionCacheTimeout directives. There is also built-in support for specialized cryptographic hardware that will perform the CPU-intensive computations and offload the main processor. The SSLMutex directive enables you to control the internal locking mechanism of the SSL engine. The SSLRandomSeed directive enables you to specify the mechanism to seed the random-number generator required for certain operations. The settings of both directives can have an impact on performance.


mod_ssl hooks into Apache's logging system and provides support for logging any SSL-related aspect of the request, ranging from the protocol used to the information contained in specific elements of a client certificate. This information can also be passed to CGI scripts via environment variables by using the StdEnvVars argument to the Options directive. You can get a listing of the available SSL variables at

The SSLOptions Directive

Many of these options can be applied in a per-directory or per-location basis. The SSL parameters might be renegotiated for those URLs. This can be controlled via the SSLOptions directive.

The SSLPassPhraseDialog directive can be used to avoid having to enter a pass phrase at startup by designating an external program that will be invoked to provide it.

Access Control

The SSLRequireSSL directive enables you to force clients to access the server using SSL. The SSLRequire directive enables you to specify a set of rules that have to be met before the client is allowed access. SSLRequire syntax can be very complex, but it allows an incredible amount of flexibility. The example shows a sample configuration from the mod_ssl documentation that restricts access based on the client certificate and the network the request came from. Access will be granted if one of the following is met:

  • The SSL connection does not use an export (weak) cipher or a NULL cipher, the certificate has been issued by a particular CA and for a particular group, and the access takes place during workdays (Monday to Friday) and working hours (8:00 a.m. to 8:00 p.m.).

  • The client comes from an internal, trusted network.

You can check the documentation for SSLRequire for a complete syntax reference.

SSLRequire Example

SSLRequire (  %{SSL_CIPHER} !~ m/^(EXP|NULL)-/ \
and %{SSL_CLIENT_S_DN_O} eq "Snake Oil, Ltd." \
and %{SSL_CLIENT_S_DN_OU} in {"Staff", "CA", "Dev"} \
and %{TIME_WDAY} >= 1 and %{TIME_WDAY} <= 5 \
and %{TIME_HOUR} >= 8 and %{TIME_HOUR} <= 20 ) \
or %{REMOTE_ADDR} =~ m/^192\.76\.162\.[0-9]+$/

Reverse Proxy with SSL

Although at the time this book was written the SSL reverse proxy functionality was not included in mod_ssl for Apache 2.0, it is likely to be included in the future. That functionality enables you to encrypt the reverse proxy connection to backend servers and to perform client and server certificate authentication on that connection. The related directives are SSLProxyMachineCertificatePath, SSLProxyMachineCertificateFile, SSLProxyVerify, SSLProxyVerifyDepth, SSLProxyCACertificatePath, SSLProxyEngine, and SSLProxyCACertificateFile. Their syntax is similar to their regular counterparts.

Monday, April 16, 2007

BBCP another High Bandwidth File Transfer Utility

BBCP is a file transfer utility, currently in alpha, used mainly for transferring huge files over high-bandwidth links.


To transfer the local file /local/path/largefile.tar to the remote system remotesystem as /remote/path/largefile.tar:

bbcp -P 2 -V -w 8m -s 16 /local/path/largefile.tar remotesystem:/remote/path/largefile.tar
  “-P 2” produces progress messages every 2 seconds.
  “-V” produces verbose output, including detailed transfer speed statistics.
  “-w 8m” sets the size of the disk I/O buffers.
  “-s 16” sets the number of parallel network streams to 16.

bbcp assumes the remote system’s non-interactive environment contains the path to the bbcp utility. This can be verified with the following command:

ssh remotesystem which bbcp

If this is not the case, the “-T” bbcp option can be used to specify how to start bbcp on the remote system. For example:

bbcp -P 2 -V -w 8m -s 16 -T 'ssh -x -a -oFallBackToRsh=no %I -l %U %H /remote/path/to/bbcp' /local/path/largefile.tar  remotesystem:/remote/path/largefile.tar

Often during large transfers the connection between the transferring systems is lost. The “-a” option gives bbcp the ability to pick up where it left off. For example:

bbcp -k -a /remotesystem/homedir/.bbcp/ -P 2 -V -w 8m -s 16 /local/path/largefile.tar remotesystem:/remote/path/largefile.tar

To transfer an entire directory tree,

bbcp -r -P 2 -V -w 8m -s 16 /local/path/* remotesystem:/remote/path

When transferring files to the Cray XT3 (jaguar) at NCCS, it is necessary to specify a particular jaguar node as the destination host because the hostname actually points to a server load balancing device which returns node addresses in a round robin fashion. For example:

bbcp -r -P 2 -V -w 8m -s 16 /local/path/*


More information on bbcp can be found by typing “bbcp -h”

CP with same privileges

How to copy files recursively while preserving permissions, ownership, and timestamps:

cp -prv /path/to/location/. .
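A quick sandbox check that -p really carries the permission bits across (paths come from mktemp, purely for demonstration; GNU coreutils stat is assumed):

```shell
# Demonstrate that cp -p preserves permission bits
src=$(mktemp -d); dst=$(mktemp -d)
echo data > "$src/f"
chmod 640 "$src/f"
cp -prv "$src/." "$dst"
perms=$(stat -c %a "$dst/f")   # GNU stat: print octal mode
echo "$perms"
```

Without -p, the copied file would instead get permissions derived from the current umask.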

Howto on AutoSetOwner in RT3

This custom action sets the owner of the ticket to the current user if nobody owns the ticket yet. You can use this scrip action with any condition you want, e.g. On Resolve.

Description: AutoSetOwner

Condition: On Resolve

Action: User Defined

Custom action preparation code:

 return 1;

Custom action cleanup code:

 # get actor ID
 my $Actor = $self->TransactionObj->Creator;
 # if actor is RT_SystemUser then get out of here
 return 1 if $Actor == $RT::SystemUser->id;
 # get out unless ticket owner is nobody
 return 1 unless $self->TicketObj->Owner == $RT::Nobody->id;
 # ok, try to change owner
 $RT::Logger->info("Auto assign ticket #". $self->TicketObj->id ." to user #". $Actor );
 my ($status, $msg) = $self->TicketObj->SetOwner( $Actor );
 unless( $status ) {
     $RT::Logger->error( "Could not assign the ticket to $Actor: $msg" );
     return undef;
 }
 return 1;

Template: Global template: Blank


This is a variation on AutoSetOwner; it auto-sets the owner of a ticket only if the person doing the correspondence is in the AdminCc watchers:

Condition: On correspond

Action: User Defined

Template: blank

## based on
## And testcode ~ line 576 of (rt3.4.2)
my $Actor = $self->TransactionObj->Creator;
my $Queue = $self->TicketObj->QueueObj;
# if actor is RT_SystemUser then get out of here
return 1 if $Actor == $RT::SystemUser->id;
# get out unless ticket owner is nobody
return 1 unless $self->TicketObj->Owner == $RT::Nobody->id;
# get out unless $Actor is part of the AdminCc watchers
return 1 unless $Queue->IsWatcher(Type => 'AdminCc', PrincipalId => $Actor);
# do the actual 'status update'
my ($status, $msg) = $self->TicketObj->SetOwner( $Actor );
unless( $status ) {
    $RT::Logger->warning( "can't set ticket owner to $Actor: $msg" );
    return undef;
}
return 1;

HowTo on repairing MySQL tables

How to Repair Tables

The discussion in this section describes how to use myisamchk on MyISAM tables (extensions .MYI and .MYD).

You can also (and should, if possible) use the CHECK TABLE and REPAIR TABLE statements to check and repair MyISAM tables.

Symptoms of corrupted tables include queries that abort unexpectedly and observable errors such as these:

   * tbl_name.frm is locked against change
   * Can't find file tbl_name.MYI (Errcode: nnn)
   * Unexpected end of file
   * Record file is crashed
   * Got error nnn from table handler

To get more information about the error, run perror nnn, where nnn is the error number. The following example shows how to use perror to find the meanings for the most common error numbers that indicate a problem with a table:

 shell> perror 126 127 132 134 135 136 141 144 145
126 = Index file is crashed / Wrong file format
127 = Record-file is crashed
132 = Old database file
134 = Record was already deleted (or record file crashed)
135 = No more room in record file
136 = No more room in index file
141 = Duplicate unique key or constraint on write or update
144 = Table is crashed and last repair failed
145 = Table was marked as crashed and should be repaired

Note that error 135 (no more room in record file) and error 136 (no more room in index file) are not errors that can be fixed by a simple repair. In this case, you must use ALTER TABLE to increase the MAX_ROWS and AVG_ROW_LENGTH table option values:


If you do not know the current table option values, use SHOW CREATE TABLE.
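A sketch of such a statement (the table name and values here are hypothetical; choose values large enough for the data you expect the table to hold):

```sql
ALTER TABLE tbl_name MAX_ROWS=1000000 AVG_ROW_LENGTH=128;
```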

For the other errors, you must repair your tables. myisamchk can usually detect and fix most problems that occur.

The repair process involves up to four stages, described here. Before you begin, you should change location to the database directory and check the permissions of the table files. On Unix, make sure that they are readable by the user that mysqld runs as (and to you, because you need to access the files you are checking). If it turns out you need to modify files, they must also be writable by you.

This section is for the cases where a table check fails, or you want to use the extended features that myisamchk provides.

If you are going to repair a table from the command line, you must first stop the mysqld server. Note that when you do mysqladmin shutdown on a remote server, the mysqld server is still alive for a while after mysqladmin returns, until all statement-processing has stopped and all index changes have been flushed to disk.

Stage 1: Checking your tables

Run myisamchk *.MYI or myisamchk -e *.MYI if you have more time. Use the -s (silent) option to suppress unnecessary information.

If the mysqld server is stopped, you should use the --update-state option to tell myisamchk to mark the table as “checked.”

You have to repair only those tables for which myisamchk announces an error. For such tables, proceed to Stage 2.

If you get unexpected errors when checking (such as out of memory errors), or if myisamchk crashes, go to Stage 3.

Stage 2: Easy safe repair

First, try myisamchk -r -q tbl_name (-r -q means “quick recovery mode”). This attempts to repair the index file without touching the data file. If the data file contains everything that it should and the delete links point at the correct locations within the data file, this should work, and the table is fixed. Start repairing the next table. Otherwise, use the following procedure:

  1. Make a backup of the data file before continuing.
  2. Use myisamchk -r tbl_name (-r means “recovery mode”). This removes incorrect rows and deleted rows from the data file and reconstructs the index file.
  3. If the preceding step fails, use myisamchk --safe-recover tbl_name. Safe recovery mode uses an old recovery method that handles a few cases that regular recovery mode does not (but is slower).

Note: If you want a repair operation to go much faster, you should set the values of the sort_buffer_size and key_buffer_size variables each to about 25% of your available memory when running myisamchk.
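For example, the buffers might be set like this when repairing (the 256M values are assumptions; substitute roughly 25% of your machine's available memory):

```
 myisamchk --sort_buffer_size=256M --key_buffer_size=256M -r tbl_name
```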

If you get unexpected errors when repairing (such as out of memory errors), or if myisamchk crashes, go to Stage 3.

Stage 3: Difficult repair

You should reach this stage only if the first 16KB block in the index file is destroyed or contains incorrect information, or if the index file is missing. In this case, it is necessary to create a new index file. Do so as follows:

  1. Move the data file to a safe place.

  2. Use the table description file to create new (empty) data and index files:

      shell> mysql db_name
      mysql> TRUNCATE TABLE tbl_name;
      mysql> quit

  3. Copy the old data file back onto the newly created data file. (Do not just move the old file back onto the new file. You want to retain a copy in case something goes wrong.)

Go back to Stage 2. myisamchk -r -q should work. (This should not be an endless loop.)

You can also use the REPAIR TABLE tbl_name USE_FRM SQL statement, which performs the whole procedure automatically. There is also no possibility of unwanted interaction between a utility and the server, because the server does all the work when you use REPAIR TABLE.

Stage 4: Very difficult repair

You should reach this stage only if the .frm description file has also crashed. That should never happen, because the description file is not changed after the table is created:

  1. Restore the description file from a backup and go back to Stage 3. You can also restore the index file and go back to Stage 2. In the latter case, you should start with myisamchk -r.

  2. If you do not have a backup but know exactly how the table was created, create a copy of the table in another database. Remove the new data file, and then move the .frm description and .MYI index files from the other database to your crashed database. This gives you new description and index files, but leaves the .MYD data file alone. Go back to Stage 2 and attempt to reconstruct the index file.

How to AutoGen Users and passwd in RT3

How to auto generate users and passwords while submitting tickets through email in Request Tracker 3.

Add this code to AutoReply Template:

*RT::User::GenerateRandomNextChar = \&RT::User::_GenerateRandomNextChar;

if (($Transaction->CreatorObj->id != $RT::Nobody->id) &&
(!$Transaction->CreatorObj->Privileged) &&
($Transaction->CreatorObj->__Value('Password') eq '*NO-PASSWORD*')
) {

my $user = RT::User->new($RT::SystemUser);
my ($stat, $pass) = $user->SetRandomPassword();

if (!$stat) {
    $OUT .= "An internal error has occurred. RT was not able to set a password for you.
Please contact your local RT administrator for assistance.";
}

$OUT .= "
You can check the current status and history of your requests at:

When prompted, enter the following username and password:

    Username: ".$user->Name."
    Password: ".$pass."
";
}


Clearing Mason Cache:

 shell> rm -rf /opt/rt3/var/mason_data/obj/*

How to migrate MediaWiki?

MediaWiki Migration

Old Server:

 mysqldump -u root -p wikidb > wikidb.sql
 tar -cvf wiki.tar wiki   # wiki is the folder under the document root

New Server:

 create database wikidb;   -- run inside the mysql client; note that both MySQL versions should be the same
 grant create, select, insert, update, delete, lock tables on wikidb.* to wiki@localhost identified by 'YourPassword';
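On the new server, the dump and files then need to be restored (the document-root path here is an assumption based on a typical layout; adjust to match yours):

```
 mysql -u root -p wikidb < wikidb.sql   # import the dump into the new wikidb
 tar -xvf wiki.tar -C /var/www/html     # unpack the wiki folder under the document root
```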

MediaWiki Upgrade

 copy all the new files to the wiki folder, then
 run php update.php from the maintenance folder after updating AdminSettings.php

Qemu virtualization

Qemu Live CD Configurations:

 $qemu -cdrom /dev/cdrom -boot d
 $qemu -cdrom xxx.iso -boot d
 $dd if=/dev/zero of=my_hdd.img bs=1024 count=2048000
 $qemu -cdrom /dev/cdrom -hda my_hdd.img -boot d
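As an alternative sketch: instead of writing ~2 GB of zeros, GNU dd can create a sparse image of similar size almost instantly by seeking past the end of the file:

```shell
# create a sparse 2 GB disk image without writing any data blocks (GNU dd assumed)
dd if=/dev/zero of=my_hdd.img bs=1 count=0 seek=2G
ls -lh my_hdd.img   # apparent size shows as 2.0G
```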

Simple NFS in Linux

At the server side:

 vi /etc/exports
  /path/to/share *(ro)
 exportfs -a
 service portmap start
 service nfs start
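On the client side, the export can then be mounted (the hostname, export path, and mount point here are placeholders):

```
 mount -t nfs server:/path/to/share /mnt
```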

Thursday, April 12, 2007

Horde another groupware

One of my experiments with Groupware and Webmail systems.



Horde requires some prerequisite software before you can use it. In addition, there are other software packages which, while not required, are recommended as without them you will experience very limited functionality. The following helps you to install the required and recommended software packages on a Fedora Core 4 system.

Apache packages

Horde is a web application, and as such, you need to provide a web server to use it. If you do not already have the Apache web server installed, you should do so at this time:

 yum install httpd
chkconfig httpd on
/etc/init.d/httpd start

PHP Packages

As Horde is a PHP application, it requires that you have PHP installed. In addition to the base php package, Horde and its applications require several other PHP packages. The following installs the most commonly needed PHP packages.

 yum install php php-xml php-imap php-devel


The Fedora Core PHP package contains a PEAR installation, but it is missing some PEAR modules needed by Horde. You can install these modules using the following command:

 pear install -f Net_IMAP Log Mail_Mime File Date Console_Getopt

Note: for Fedora Core 5 you should also install the DB package for PEAR.

 pear install -f DB



While a SQL server is not required to run Horde, it is recommended as much of the Horde functionality will be lost without it. You may run either MySQL or PostgreSQL, but you should not run both!

While you do not need to run the SQL server on the same machine that runs the Horde web applications, that is the most common setup for small sites, and hence the following assumes this type of setup.


For MySQL:

 yum install php-mysql mysql mysql-server
/sbin/chkconfig --levels 235 mysqld on
/etc/init.d/mysqld start

(You might need more packages depending your installation.)



For PostgreSQL:

 yum install postgresql-server php-pgsql postgresql-libs mod_auth_pgsql postgresql
/sbin/chkconfig --levels 235 postgresql on
/etc/init.d/postgresql start


The instructions below install Horde and its applications from CVS. In order to use CVS, you will need to have the cvs package installed in your machine. The following command can be used to install the cvs package.

yum install cvs


The following commands can be used to install Horde along with the more popular Horde applications, using anonymous CVS. There are other ways to install Horde and its applications other than CVS. However, this documentation only covers using CVS for installation.

 cd /var/www/html
cvs -d login
Password: horde
cvs -d checkout horde
cd horde
cvs -d checkout framework imp kronolith mnemo nag passwd turba ingo
cd framework
pear channel-discover
php install-packages.php
mkdir -p /var/horde/vfs
chown -R apache:apache /var/horde


Once all the software is installed, you need to configure it for use with Horde. Below is some information on how to configure the various software packages. Note that configuration will vary depending on your needs, and the following is just a basic guide; you may need to adjust your configuration for your needs.


Before you can use the MySQL server with Horde, you must set up the SQL server and create the needed database tables.

Create a MySQL account

First, you need to create a SQL user. In the instructions below, replace 'password' with the actual password you want to set for this account.

 mysqladmin -u root password 'password'
mysqladmin -u root -h password 'password'

Creating the MySQL Database and Tables

Next, you need to create the database and its tables. First, you must edit the database scripts Horde provides to set the database password to the password you set in the previous step.

 cd /var/www/html/horde/scripts/sql
vi create.mysql.sql

Then change the database password in the file, and save it. Once you have set the password correctly in the script, you should run the script in order to create the database:

 mysql -u root -p < create.mysql.sql


Before you can use the PostgreSQL server with Horde, you must set up the SQL server and create the needed database tables.

 cd /var/www/html/horde/scripts/sql
vi pgsql_create.sql

Then change the database password in the file and save it. Once you have set the password correctly in the script, you should run the script in order to create the database:

 psql -d template1 -f pgsql_create.sql -U postgres
psql -d horde -U horde -f auth.sql
psql -d horde -U horde -f category.sql
psql -d horde -U horde -f prefs.sql

Note that you may see some NOTICE messages from PostgreSQL noting that implicit indexes have been created; these are normal and can be ignored.


First, you need to install the distribution default configuration files, present in the config subdirectory within each Horde application (including the base Horde configuration directory itself):

 cd /var/www/html/horde

for a in . mnemo nag turba imp ingo kronolith passwd; do cd /var/www/html/horde/$a/config; for f in *.dist; do cp $f `basename $f .dist`; done; done

Next, we want to make sure that all the files have the correct file permissions:

 cd /var/www/html
chown -R apache:apache horde
chmod -R o-rwx horde

Finally, you now need to do the basic configuration of all the Horde applications using the Horde Administrative Interface. Log in to your Horde installation. Once you're in, click on the Administration link on the sidebar, then the Setup sub-option. The default administrator password is mailadmin. You should see a list of available Horde applications in the main frame; you now need to go through this list and configure each Horde application as you please. Click on an entry in this list and you should be brought to a configuration screen. Go through each tab within this screen (if there are multiple tabs; otherwise there will just be a single page) and change any settings as you see fit (the default options are usually sufficient if you don't feel comfortable editing all the available variables). Once you have finished configuring an application, click on the Generate XXX Configuration button at the bottom of the page to auto-generate the relevant conf.php file for that application. Repeat this process for every application in the Setup page.

Note that the above only configures the base configuration of the applications. There are other configuration files which you may also want to configure for each application. Such configuration must be done by hand. See the docs/INSTALL file for each application for more information on configuring that application.

How to configure proxy for common linux apps


to use a proxy with PEAR, you should use

 $ pear config-set http_proxy http://proxypc.localdomain 


For yum to work you have to add a proxy setting to /etc/yum.conf. Note that yum reads a proxy directive from its config file rather than shell export statements:

 proxy=


For wget to work add this to ~/.bash_profile

 export http_proxy=
export ftp_proxy=

then run command

 source ~/.bash_profile

How to add a disk to LVM


Quick Notes First:

Formatting the new Disk

Suppose the disk is /dev/sdb, the second SCSI disk.

   fdisk /dev/sdb
   Create as many partitions as you need using command n.
   Set their partition type with command t to 8e (Linux LVM).
   Write and exit with the command w.

Format the partitions you require using mkfs command

   mkfs -t ext3 -c /dev/sdb1

LVM commands

   pvcreate /dev/sdb1
   vgextend VolGroup00 /dev/sdb1
   lvextend -L 15G /dev/VolGroup00/LogVol01    # extend LogVol01 to 15GB
   lvextend -L+1G /dev/VolGroup00/LogVol01     # add one more GB to Logical Volume LogVol01
   ext2online /dev/VolGroup00/LogVol01         # grow the filesystem to match the resized volume

That's it, finished.

Extra Instructions

Creating Physical Volumes for LVM

Since LVM requires entire Physical Volumes to be assigned to Volume Groups, you must have a few empty partitions ready to be used by LVM. Install the OS on a few partitions and leave a bit of empty space. Use fdisk under Linux to create a number of empty partitions of equal size. You must mark them with fdisk as type 0x8E (Linux LVM). We created five 256MB partitions, /dev/hda5 through /dev/hda9.

Registering Physical Volumes

The first thing necessary to get LVM running is to register the physical volumes with LVM. This is done with the pvcreate command. Simply run pvcreate /dev/hdxx for each hdxx device you created above. In our example, we ran pvcreate /dev/hda5 and so on.

Creating a Volume Group

Next, create a Volume Group. You can set certain parameters with this command, like physical extent size, but the defaults are probably fine. We'll call the new Volume Group vg01. Just type vgcreate vg01 /dev/hda5.

When this is done, take a look at the Volume Group with the vgdisplay command. Type vgdisplay -v vg01. Note that you can create up to 256 LVs, can add up to 256 PVs, and each LV can be up to 255.99GBs! More important, note the Free PE line. This tells you how many Physical Extents we have to work with when creating LVs. For a 256MB disk, this reads 63 because there is an unused remainder smaller than the 4MB PE size.

Creating a Logical Volume

Next, let's create a Logical Volume called lv01 in VG vg01. Again, there are some settings that may be changed when creating an LV, but the defaults work fine. The important choice to make is how many Logical Extents to allocate to this LV. We'll start with 4 for a total size of 16MB. Just type lvcreate -l4 -nlv01 vg01. You may also specify the size in MBs by using -L instead of -l, and LVM will round off the result to the nearest multiple of the LE size.

Take a look at your LV with the lvdisplay command by typing lvdisplay -v /dev/vg01/lv01. You can ignore the page of Logical extents for now, and page up to see the more interesting data.

Adding a disk to the Volume Group

Next, we'll add /dev/hda6 to the Volume Group. Just type vgextend vg01 /dev/hda6 and you're done! You can check this out by using vgdisplay -v vg01. Note that there are now a lot more PEs available!

Creating a striped Logical Volume

Note that LVM created your whole Logical Volume on one Physical Volume within the Volume Group. You can also stripe an LV across two Physical Volumes with the -i flag in lvcreate. We'll create a new LV, lv02, striped across hda5 and hda6. Type lvcreate -l4 -nlv02 -i2 vg01 /dev/hda5 /dev/hda6. Specifying the PV on the command line tells LVM which PEs to use, while the -i2 command tells it to stripe it across the two.

You now have an LV striped across two PVs!

Moving data within a Volume Group

Up to now, PEs and LEs were pretty much interchangeable. They are the same size and are mapped automatically by LVM. This does not have to be the case, though. In fact, you can move an entire LV from one PV to another, even while the disk is mounted and in use! This will impact your performance, but it can prove useful.

Let's move lv01 to hda6 from hda5. Type pvmove -n/dev/vg01/lv01 /dev/hda5 /dev/hda6. This will move all LEs used by lv01 mapped to PEs on /dev/hda5 to new PEs on /dev/hda6. Effectively, this migrates data from hda5 to hda6. It takes a while, but when it's done, take a look with lvdisplay -v /dev/vg01/lv01 and notice that it now resides entirely on /dev/hda6!

Removing a Logical Volume from a Volume Group

Let's say we no longer need lv02. We can remove it and place its PEs back in the empty pool for the Volume Group. First, unmount its filesystem. Next, deactivate it with lvchange -a n /dev/vg01/lv02. Finally, delete it by typing lvremove /dev/vg01/lv02. Look at the Volume Group and notice that the PEs are now unused.

Removing a disk from the Volume Group

You can also remove a disk from a volume group. We aren't using hda5 anymore, so we can remove it from the Volume Group. Just type vgreduce vg01 /dev/hda5 and it's gone!