Wednesday, December 19, 2007
foss.in
Two weeks back I attended the Open Source conference FOSS.IN. I wanted to attend at least three of the four days, which were packed with very interesting talks, but could make it for only a day and a half. The crowd was very techie, and this conference is a must-attend if you have anything to do with open source in India.
One of the snaps is of a kernel guy who could not stand in one place; it was so difficult to catch a snap that I had to take a video ;). His talk was really good.
Wednesday, November 14, 2007
Third IEEE International Conference on e-Science and Grid Computing
Third IEEE International Conference on e-Science and Grid Computing
December 10-13, 2007, Bangalore, India
http://www.escience2007.org
Sponsored By:
IEEE Computer Society's Technical Committee on Scalable Computing
http://www.ieeetcsc.org
Organised/Supported by:
Centre for Development of Advanced Computing, India
The University of Melbourne, Australia
Indiana University, USA
LSU Center for Computation & Technology, USA
EuroIndiaGrid Project
OMII (Open Middleware Infrastructure Institute), UK
Microsoft Corporation
Hewlett Packard (HP)
**************************************************
UPCOMING DEADLINES:
* Advance Registration Deadline: Nov 7, 2007
PROGRAM HIGHLIGHTS:
-------------------
The e-Science 2007 conference, sponsored by the IEEE Computer Society's Technical Committee for Scalable Computing (TCSC), is designed to bring together leading international and interdisciplinary research communities, developers, and users of e-Science applications and enabling IT technologies. The conference serves as a forum to present the results of the latest research and product/tool developments, and highlight related activities from around the world.
Keynotes:
---------
The conference features plenary keynote speakers drawn from Europe, North America, and Asia. It also features technical talks from industry.
Contributed Papers:
-------------------
The Program Committee has selected 60 top quality research papers out of 206 submissions from all over the world for presentation at the conference.
Workshops:
----------
* OGF (Open Grid Forum) Workshop on eScience Highlights
* Innovative and Collaborative Problem Solving Environment in Distributed Resources
* Scientific Workflows and Business Workflow Standards in e-Science
* International Grid Interoperability and Interoperation Workshop
Posters and Research Demos:
---------------------------
The conference features 21 posters and 5 "live" research demos selected from submissions from all over the world.
Tutorials:
----------
1. Introduction to Globus Toolkit GT4
Presenter: Ravi Madduri, Argonne National Laboratory, USA
2. Market-based Grid Computing and the Gridbus Middleware
Presenter: Rajkumar Buyya, The University of Melbourne, Australia
3. Autonomic Grid Computing
Presenters: Manish Parashar, Rutgers University (USA), and Omer Rana, Cardiff University (UK)
4. Applications enablement on Grid
Presenters: Mangala and Prahlad Rao, C-DAC, India
The exhibition session will consist of exhibits/presentations from vendor companies and R&D laboratories.
TRAVEL SCHOLARSHIPS FOR STUDENTS:
---------------------------------
The IEEE Technical Committee on Scalable Computing, The University of Melbourne, and C-DAC are offering travel support to students. All eligible research degree students are encouraged to apply for one of the following scholarships:
1. International Students (TCSC supported):
http://www.ieeetcsc.org/young/eScience07/TCSCgrant.html
2. India-based Students (TCSC and Uni. of Melbourne supported):
http://www.ieeetcsc.org/young/eScience07/TCSC-UnimelbGrant.html
3. C-DAC supported (For Indian students only):
http://www.escience2007.org/scholarship.asp
CONFERENCE REGISTRATION:
------------------------
The conference registration includes attendance at all e-Science (1) workshops, (2) tutorials, (3) technical sessions, (4) posters and research demos, and (5) exhibits, as well as (6) a copy of the conference proceedings published by the IEEE Computer Society.
==================================================
Wednesday, October 31, 2007
JavaOne Conference
Tuesday, October 09, 2007
ZFS
http://drewthaler.blogspot.com/2007/10/don-be-zfs-hater.html
ZFS rocks
Wednesday, September 26, 2007
No longer a scientist
Tuesday, May 01, 2007
Format of passwd and shadow files
Format of the /etc/passwd file
A non-shadowed /etc/passwd file has the following format:
username:passwd:UID:GID:full_name:directory:shell
Where:
username
The user (login) name
passwd
The encoded password
UID
Numerical user ID
GID
Numerical default group ID
full_name
The user's full name - Actually this field is called the GECOS (General Electric Comprehensive Operating System) field and can store information other than just the full name. The Shadow commands and manual pages refer to this field as the comment field.
directory
User's home directory (Full pathname)
shell
User's login shell (Full Pathname)
For example:
username:Npge08pfz4wuk:503:100:Full Name:/home/username:/bin/sh
Here Np is the salt and ge08pfz4wuk is the encoded password. The encoded salt/password could just as easily have been kbeMVnZM0oL7I; the two are exactly the same password. There are 4096 possible encodings for the same password. (The example password in this case is 'password', a really bad password.) Once the shadow suite is installed, the /etc/passwd file would instead contain:
username:x:503:100:Full Name:/home/username:/bin/sh
The x in the second field is now just a placeholder. The format of the /etc/passwd file didn't really change; it just no longer contains the encoded password. This means that any program that reads /etc/passwd but does not actually need to verify passwords will still operate correctly. The passwords are now relocated to the shadow file (usually /etc/shadow).
Format of the shadow file
The /etc/shadow file contains the following information:
username:passwd:last:may:must:warn:expire:disable:reserved
Where:
username
The User Name
passwd
The Encoded password
last
Days since Jan 1, 1970 that password was last changed
may
Days before password may be changed
must
Days after which password must be changed
warn
Days before password is to expire that user is warned
expire
Days after password expires that account is disabled
disable
Days since Jan 1, 1970 that account is disabled
reserved
A reserved field
username:Npge08pfz4wuk:9479:0:10000::::
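To eyeball these fields quickly, a one-line awk sketch (assuming a standard colon-delimited /etc/passwd) can split each record and print selected columns:
# print username, UID and login shell (fields 1, 3 and 7)
awk -F: '{ print $1, $3, $7 }' /etc/passwd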
Configuring Quota on Linux
Configuration of disk usage quotas on Linux - Perform the following as root:
- Edit /etc/fstab to add the qualifier "usrquota" or "grpquota" to the partition. The following file system mounting options can be specified in /etc/fstab: grpquota, noquota, quota and usrquota. (These options are also accepted by the mount command but ignored.) The filesystem, when mounted, will show up in /etc/mtab, the list of all currently mounted filesystems.
- To enable user quota support on a file system, add "usrquota" to the fourth field containing the word "defaults".
...
/dev/hda2 /home ext3 defaults,usrquota 1 1
...
- Replace "usrquota" with "grpquota", should you need group quota support on a file system.
...
/dev/hda2 /home ext3 defaults,grpquota 1 1
...
- Need both user quota and group quota support on a file system?
...
/dev/hda2 /home ext3 defaults,usrquota,grpquota 1 1
...
- touch /partition/aquota.user
where the partition might be /home or some partition defined in /etc/fstab, then
chmod 600 /partition/aquota.user
The file should be owned by root. Quotas may also be set for groups by using the file aquota.group.
Quota file names:
- Quota Version 2 (Linux 2.4/2.6 kernel: Red Hat 7.1+/8/9,FC 1-3): aquota.user, aquota.group
- Quota Version 1 (Linux 2.2 kernel: Red Hat 6, 7.0): quota.user, quota.group
- Re-boot or re-mount file partition with quotas.
- Re-boot: shutdown -r now
- Re-mount partition: mount -o remount /partition
After re-booting or re-mounting the file system, the partition will show up in the list of mounted filesystems as having quotas. Check /etc/mtab:
...
/dev/hda5 / ext3 rw,usrquota 0 0
...
- quotacheck -vgum /partition
or
quotacheck -vguma
- For example (Linux kernel 2.4+: Red Hat 7.1+, Fedora): quotacheck -vguma
quotacheck: WARNING - Quotafile //aquota.user was probably truncated. ...
quotacheck: Scanning /dev/hda5 [/] done
quotacheck: Checked 9998 directories and 179487 files
- For example (Linux kernel 2.2: Red Hat 6/7.0): quotacheck -v /dev/hda6
System response: Scanning /dev/hda6 [/home] done
Checked 444 directories and 3136 files
Using quotafile /home/quota.user
Quotacheck is used to scan a file system for disk usage and updates the quota record file (quota.user/aquota.user) to the most recent state. It is recommended that quotacheck be run at bootup (part of the Red Hat default installation).
Man page: quotacheck - scan a filesystem for disk usage, create, check and repair quota files
- quotaon -av
System Response: /dev/hda6: user quotas turned on
quotaon - enable disk quotas on a file system.
quotaoff - turn off disk quotas for a file system.
Man page: quotaon - turn filesystem quotas on and off
- edquota -u user_id
Edit directly using vi editor commands. (See below for more info.)
For example: edquota -u user1
- System Response (RH 7+):
Disk quotas for user user1 (uid 501):
Filesystem blocks soft hard inodes soft hard
/dev/hda5 1944 0 0 120 0 0
- blocks: 1k blocks
- inodes: Number of entries in directory file
- soft: Max number of blocks/inodes a user may have on the partition before a warning is issued and the grace period countdown begins. If set to "0" (zero), no limit is enforced.
- hard: Max number of blocks/inodes a user may have on the partition. If set to "0" (zero), no limit is enforced.
- System Response (RH 6):
Quotas for user user1:
/dev/sdb6: blocks in use: 56, limits (soft = 0, hard = 0)
inodes in use: 50, limits (soft = 0, hard = 0)
Something failed if you get the response:
/dev/sdb6: blocks in use: 0, limits (soft = 0, hard = 0)
inodes in use: 0, limits (soft = 0, hard = 0)
Edit limits:
Quotas for user user1:
/dev/hda6: blocks in use: 992, limits (soft = 50000, hard = 55000)
inodes in use: 71, limits (soft = 10000, hard = 11000)
If editing group quotas: edquota -g group_name
Man page: edquota - edit user quotas
- List quotas: quota -u user_id
For example: quota -u user1
- System Response (RH 7+):
Disk quotas for user user1 (uid 501):
Filesystem blocks quota limit grace files quota limit grace
/dev/hda6 992 50000 55000 71 10000 11000
If this does not respond similar to the above, then restart the computer: shutdown -r now
Man page: quota - display disk usage and limits
Quota Reports
- Report on all users over quota limits: quota -q
- Quota summary report: repquota -a
*** Report for user quotas on device /dev/hda5
Block grace time: 7days; Inode grace time: 7days
Block limits File limits
User used soft hard grace used soft hard grace
----------------------------------------------------------------------
root -- 4335200 0 0 181502 0 0
bin -- 15644 0 0 101 0 0
...
user1 -- 1944 0 0 120 0 0
No limits are shown for these users because the limits are set to 0.
Man page: repquota - summarize quotas for a filesystem.
Cron:
Quotacheck should scan the file system periodically via a cron job (say, every week). Add a script to the /etc/cron.weekly/ directory.
File: /etc/cron.weekly/runQuotacheck
- Linux Kernel 2.4: Red Hat 7.1 - Fedora Core 3:
#!/bin/bash
/sbin/quotacheck -vguma
- Linux Kernel 2.2: Red Hat 6/7.0:
#!/bin/bash
/sbin/quotacheck -v -a
(Remember to chmod +x /etc/cron.weekly/runQuotacheck)
EdQuota Notes:
The "edquota" command puts you into a "vi" editing mode so knowledge of the "vi" editor is necessary. Another editor may be specified with the EDITOR environment variable. You are NOT editing the quota.user file directly. The /partition/quota.user or quota.group file is a binary file which you do not edit directly. The command edquota gives you an ascii interface with the text prepared for you. When you ":wq" to save the file from the vi session, it is converted to binary by the edquota command and stored in the quota.user file.
Assigning quotas for a bunch of users with the same value: to rapidly set quotas for all users on my system to the same value as user user1, I would first edit user1's quota information by hand, then execute:
edquota -p user1 `awk -F: '$3 > 499 {print $1}' /etc/passwd`
This assumes that the user UIDs start from 500 and increment upwards. "blocks in use" is the total number of blocks (in kilobytes) a user has consumed on a partition. "inodes in use" is the total number of files a user has on a partition.
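If you prefer a non-interactive alternative to the edquota/vi workflow, the setquota command from the same quota package takes the limits directly on the command line; this sketch mirrors the example limits used above:
# setquota -u user block-soft block-hard inode-soft inode-hard filesystem
setquota -u user1 50000 55000 10000 11000 /home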
edquota options:
Option | Description |
---|---|
-r -m | Edit quotas on remote server using RPC. Remote server must be configured with the daemon rpc.rquotad |
-u | Edit user quota |
-g | Edit group quota |
-p user-id | Duplicate the quotas based on existing prototype user |
-F format | Format: vfsold (version 1), vfsv0 (version 2), rpc (quotas over NFS), xfs (quotas for XFS filesystem) |
-f /file-system | Perform on specified filesystem. Default is to apply on all filesystems with quotas |
-t | Edit the soft time limits for each filesystem. |
-T | Edit time for user/group when softlimit is enforced. Specify number and unit or "unset" |
Soft Limit and Hard Limits:
- The soft limit indicates the maximum amount of disk usage a quota user may have on a partition. Combined with the "grace period", it acts as the border line: a quota user is issued warnings about an impending quota violation when it is passed. The hard limit works only when the "grace period" is set; it specifies the absolute limit on disk usage, which a quota user cannot exceed.
Grace Period:
- "Grace Period" is configured with the command "edquota -t", "grace period" is a time limit before the "soft limit" is enforced for a file system with quota enabled. Time units of sec(onds), min(utes), hour(s), day(s), week(s), and month(s) can be used. This is what you'll see with the command "edquota -t":
System response:
- Linux Kernel 2.4+: Red Hat 7.1+/Fedora:
Grace period before enforcing soft limits for users:
Time units may be: days, hours, minutes, or seconds
Filesystem Block grace period Inode grace period
/dev/hda5 7days 7days
- Linux Kernel 2.2: Red Hat 6/7.0:
Time units may be: days, hours, minutes, or seconds
Grace period before enforcing soft limits for users:
/dev/hda2: block grace period: 0 days, file grace period: 0 days
Change the 0 days part to any length of time you feel reasonable. A good choice might be 7 days (or 1 week).
Quota files: (non-XFS file systems)
- The edquota command will create/edit the quota file at the root of the file system. (See /etc/mtab for the list of the currently mounted filesystems.)
- Version 2: aquota.user, aquota.group
- Version 1: quota.user, quota.group
Self Signed SSL certificates
Use self-signed certificates to test single systems, such as a test web server. Self-signed certificates become impractical in any other case. A local CA, while more complex to set up, reduces the number of keys that need to be distributed for verification, and properly replicates a real-world certificate environment.
Creation of certificates requires the openssl utility. This command should be part of an OpenSSL installation, though it may be installed outside the standard search path, in /usr/local/ssl/bin or elsewhere.
$ which openssl
/usr/bin/openssl
- Generate the Rivest, Shamir and Adleman (RSA) key
- Create the Certificate
- Extract Metadata (Optional)
- Combine Key and Certificate Data (Optional)
OpenSSL can generate a Digital Signature Algorithm (DSA) key (with the gendsa option), though for compatibility RSA keys are most frequently used. Learn more about the genrsa option to openssl.
$ openssl genrsa 1024 > host.key
$ chmod 400 host.key
Modern systems should provide a random device and sufficient entropy for key generation. The data in the host.key file must be protected, as anyone with this information can decrypt traffic encrypted with this key.
Learn more about the req option to openssl. The -new, -x509 and -nodes arguments are required to create an unencrypted certificate. The -days argument specifies how long the certificate will be valid for.
$ openssl req -new -x509 -nodes -sha1 -days 365 -key host.key > host.cert
Questions may be asked to fill out the certificate’s x509 attributes. The answers should be adjusted for the locale:
Country Name (2 letter code) [AU]:US
State or Province Name (full name) [Some-State]:Washington
Locality Name (eg, city) []:Seattle
Organization Name (eg, company) [Internet Widgits Pty Ltd]:Sial.org
Organizational Unit Name (eg, section) []:
Common Name (eg, YOUR name) []:mail.example.org
Email Address []:postmaster@example.org
The Common Name field usually must exactly match the hostname of the system the certificate will be used on; otherwise, clients will complain about a certificate-to-hostname mismatch.
The certificate data in the host.cert file does not need to be protected like the private key file does. In fact, it will likely need to be transferred to all the client systems that need to verify the key of the server being connected to. If this is the case, set up a CA, and distribute the signing certificate to the clients instead of each self-signed certificate.
Optionally, various certificate metadata can be saved for quick reference, for example to verify the key fingerprint. Learn more about the x509 option to openssl.
$ openssl x509 -noout -fingerprint -text < host.cert > host.info
Some applications may require that the key and certificate data be in a single file. I recommend keeping the key and certificate data separate if possible, as the key data needs to be protected, and the certificate data available to all. Combining the data means the resulting file must be protected like a key file.
$ cat host.cert host.key > host.pem \
&& rm host.key
$ chmod 400 host.pem
The host.cert certificate data will need to be exported to client systems for use in testing.
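Before exporting host.cert, it is worth confirming what was actually generated; these openssl x509 invocations print the certificate's subject, validity dates, and fingerprint:
$ openssl x509 -in host.cert -noout -subject -dates
$ openssl x509 -in host.cert -noout -fingerprint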
The openssl.cnf file
Localize the system openssl.cnf to include relevant X509 attributes of the certificate. This will save typing and avoid errors when creating certificates. The location of this file varies by system.
$ grep Name_default /etc/ssl/openssl.cnf
countryName_default = US
stateOrProvinceName_default = Washington
0.organizationName_default = Sial.org
#1.organizationName_default = World Wide Web Pty Ltd
#organizationalUnitName_default =
Friday, April 27, 2007
UW to Dovecot migration
Configuration primer for a migration from UW IMAP, with pine, Thunderbird, and squirrelmail as clients. For compatibility the mbox format is used.
A description of pine with IMAP access without entering any password is also discussed. Please use dovecot 1.0beta1 or later for correct handling with pine.
Short overview of Mail folders:
===============================
| Folder | Used | elm | pine | Thunderbird | squirrelmail | UW imapd | dovecot |
| Base directory | ~/Mail | ~/Mail | ~/mail | as configured | mail | | |
| Sent folder | ~/Mail/sent | ~/Mail/sent | ~/mail/sent-mail | Sent | Sent | | |
| Trash folder | ~/Mail/Trash | | - | Trash | Trash | | |
| Drafts folder | ~/Mail/Drafts | | saved-messages | Drafts | Drafts | | |
| Templates | ~/Mail/Templates | | | Templates | | | |
| Unsent folder | same as Thunderbird | | | Local Folders/Unsent Messages | | | |
| Postponed | | | postponed-msgs | | | | |
| Canceled mail | | ~/Canceled.mail | ~/dead.letter | | | | |
| Personal namespace | | | | | | | |
| Public namespace | | | | | | #news | |
| Other users | | | | | | | |
pine setup:
===========
SETUP(S)/collectionLists(L)/Mail
Just add the following to Server:
localhost/notls
Before:
Nickname : Mail
Server :
Path : Mail/
View :
After:
Nickname : Mail
Server : localhost/notls
Path : Mail/
View :
In ~/.pinerc modify the following configuration parameters:
mail-check-interval=15
rsh-open-timeout=30000
rsh-path=
rsh-command=/usr/sbin/dovecot --exec-mail imap
# For large Mailboxes
tcp-read-warning-timeout=180
For details have a look at:
http://www.unix.org.ua/orelly/networking_2ndEd/ssh/ch11_03.htm
http://www.cs.unc.edu/cgi-bin/howto?howto=pine-imap
http://www.ii.com/internet/messaging/pine/
http://www.umanitoba.ca/acn/docs/pine/pine-imap.html
Migration from UW Imapd to dovecot:
===================================
Disable UW Imapd in xinetd
http://wiki.dovecot.org/Migration
Migrate Mailboxes:
http://wiki.dovecot.org/uw2dovecot.sh
or
cd /home/$USER
cp .mailboxlist .subscriptions
dovecot configuration:
Config (/etc/dovecot.conf):
UW Imapd compatible
protocols = imaps
default_mail_env = mbox:~:INBOX=/var/mail/%u
mail_full_filesystem_access = yes
mbox_read_locks = fcntl
mbox_write_locks = fcntl
mbox_lazy_writes=no
Self signed Certificate for SSL:
cd /etc/pki/dovecot/private
openssl genrsa -out dovecot.pem 2048
openssl req -new -x509 -nodes -sha1 -days 3650 -key dovecot.pem >../dovecot.pem
Enter the data for the certificate
http://sial.org/howto/openssl/self-signed/
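Once dovecot has been restarted with the new certificate, the IMAPS listener can be tested from the shell with openssl's s_client, which acts as a bare-bones IMAP client (type the tagged commands at the prompt; the a1/a2 tags are arbitrary):
openssl s_client -connect localhost:993
a1 LOGIN username password
a2 LOGOUT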
dovecot debugging:
==================
Config (/etc/dovecot.conf):
#GW:
mail_executable = /usr/libexec/dovecot/rawlog /usr/libexec/dovecot/imap
Directory ~/dovecot.rawlog must exist and the input/output will be logged there.
For ethereal debugging use the following dovecot configuration:
Sniffing:
#GW:
protocols = imap imaps
#GW:
disable_plaintext_auth = no
maildir/mbox documentation:
===========================
http://techpubs.sgi.com/library/tpl/cgi-bin/getdoc.cgi?coll=fw&db=man&fname=/usr/freeware/catman/u_man/cat5/mbox.Z
http://people.redhat.com/rkeech/maildir-migration.txt
squirrelmail:
=============
$use_imap_tls = true;
$imapPort = 993;
$imap_server_type ='dovecot';
$optional_delimiter = 'detect';
$force_username_lowercase = true;
$default_folder_prefix = '~/Mail/';
$sent_folder = 'sent';
$show_prefix_option = false;
$show_contain_subfolders_option = false;
Thunderbird Plugins
===================
With IMAP folders the Xpunge plugin is very useful for keeping mailboxes consistent.
Xpunge
https://addons.mozilla.org/extensions/moreinfo.php?application=thunderbird&category=Top%20Rated&numpg=10&id=1279
http://www.cs.ualberta.ca/~tegos/mozilla/tb/
Securing Apache through SSL
SSL Configuration
The previous sections introduced the (not-so-basic) concepts behind SSL and you have learned how to generate keys and certificates. Now, finally, you can configure Apache to support SSL. mod_ssl must either be compiled statically or, if compiled as a loadable module, the appropriate LoadModule directive must be present in the configuration file.
If you compiled Apache yourself, a new Apache configuration file, named ssl.conf, should be present in the conf/ directory. That file contains a sample Apache SSL configuration and is referenced from the main httpd.conf file via an Include directive.
If you want to start your configuration from scratch, you can add the following configuration snippet to your Apache configuration file:
Listen 80
Listen 443
<VirtualHost _default_:443>
ServerName www.example.com
SSLEngine on
SSLCertificateFile \
/usr/local/ssl/install/openssl/certs/www.example.com.cert
SSLCertificateKeyFile \
/usr/local/ssl/install/openssl/certs/www.example.com.key
</VirtualHost>
With the previous configuration, you set up a new virtual host that will listen to port 443 (the default port for HTTPS) and you enable SSL on that virtual host with the SSLEngine directive.
You need to indicate where to find the server's certificate and the file containing the associated key. You do so by using SSLCertificateFile and SSLCertificateKeyfile directives.
Starting the Server
Now you can stop the server if it is running, and start it again. If your key is protected by a pass phrase, you will be prompted for it. After this, Apache will start and you should be able to connect securely to it via the https://www.example.com/ URL.
If you compiled and installed Apache yourself, in many of the vendor configuration files you can see that the SSL directives are surrounded by an <IfDefine SSL> block, so that SSL support is enabled only when Apache is started with SSL defined (for example, via the -DSSL startup flag).
If you are unable to successfully start your server, check the Apache error log for clues about what might have gone wrong. For example, if you cannot bind to the port, make sure that another Apache is not running already. You must have administrator privileges to bind to port 443; otherwise, you can change the port to 8443 and access the URL via https://www.example.com:8443.
Configuration Directives
mod_ssl provides comprehensive technical reference documentation. This information will not be reproduced here; rather, I will explain what is possible and which configuration directives you need to use. You can then refer to the online SSL documentation bundled with Apache for the specific syntax or options.
Algorithms
You can control which ciphers and protocols are used via the SSLCipherSuite and SSLProtocol commands. For example, you can configure the server to use only strong encryption with the following configuration:
SSLProtocol all
SSLCipherSuite HIGH:MEDIUM
See the Apache documentation for a detailed description of all available ciphers and protocols.
Client Certificates
Similarly to how clients can verify the identity of servers using server certificates, servers can verify the identity of clients by requiring a client certificate and making sure that it is valid.
SSLCACertificateFile and SSLCACertificatePath are two Apache directives used to specify trusted Certificate Authorities. Only clients presenting certificates signed by these CAs will be allowed access to the server.
The SSLCACertificateFile directive takes a file containing a list of CAs as an argument. Alternatively, you could use the SSLCACertificatePath directive to specify a directory containing trusted CA files. Those files must have a specific format, described in the documentation. SSLVerifyClient enables or disables client certificate verification. SSLVerifyDepth controls the number of delegation levels allowed for a client certificate. The SSLCARevocationFile and SSLCARevocationPath directives enable you to specify certificate revocation lists to invalidate certificates.
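As a sketch, requiring client certificates for one protected area might look like the following; the CA bundle path is a placeholder (SSLCACertificateFile belongs at the server or virtual host level, while SSLVerifyClient may be set per location):
SSLCACertificateFile /usr/local/ssl/install/openssl/certs/ca.cert
<Location /secure>
SSLVerifyClient require
SSLVerifyDepth 1
</Location>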
Performance
SSL is a protocol that requires intensive calculations. mod_ssl and OpenSSL allow several ways to speed up the protocol by caching some of the information about the connection. You can cache certain settings using the SSLSessionCache and SSLSessionCacheTimeout directives. There is also built-in support for specialized cryptographic hardware that will perform the CPU-intensive computations and offload the main processor. The SSLMutex directive enables you to control the internal locking mechanism of the SSL engine. The SSLRandomSeed directive enables you to specify the mechanism to seed the random-number generator required for certain operations. The settings of both directives can have an impact on performance.
Logging
mod_ssl hooks into Apache's logging system and provides support for logging any SSL-related aspect of the request, ranging from the protocol used to the information contained in specific elements of a client certificate. This information can also be passed to CGI scripts via environment variables by using the StdEnvVars argument to the Options directive. You can get a listing of the available SSL variables at http://httpd.apache.org/docs-2.0/ssl/ssl_compat.html.
The SSLOptions Directive
Many of these options can be applied in a per-directory or per-location basis. The SSL parameters might be renegotiated for those URLs. This can be controlled via the SSLOptions directive.
The SSLPassPhraseDialog directive can be used to avoid having to enter a pass phrase at startup by designating an external program that will be invoked to provide it.
Access Control
The SSLRequireSSL directive enables you to force clients to access the server using SSL. The SSLRequire directive enables you to specify a set of rules that have to be met before the client is allowed access. SSLRequire syntax can be very complex, but it allows an incredible amount of flexibility. The example below shows a sample configuration from the mod_ssl documentation that restricts access based on the client certificate and the network the request came from. Access will be granted if one of the following is met:
The SSL connection does not use an export (weak) cipher or a NULL cipher, the certificate has been issued by a particular CA and for a particular group, and the access takes place during workdays (Monday to Friday) and working hours (8:00 a.m. to 8:00 p.m.).
The client comes from an internal, trusted network.
You can check the documentation for SSLRequire for a complete syntax reference.
SSLRequire Example
SSLRequire ( %{SSL_CIPHER} !~ m/^(EXP|NULL)-/ \
and %{SSL_CLIENT_S_DN_O} eq "Snake Oil, Ltd." \
and %{SSL_CLIENT_S_DN_OU} in {"Staff", "CA", "Dev"} \
and %{TIME_WDAY} >= 1 and %{TIME_WDAY} <= 5 \
and %{TIME_HOUR} >= 8 and %{TIME_HOUR} <= 20 ) \
or %{REMOTE_ADDR} =~ m/^192\.76\.162\.[0-9]+$/
Reverse Proxy with SSL
Although at the time this book was written the SSL reverse proxy functionality was not included in mod_ssl for Apache 2.0, it is likely to be included in the future. That functionality enables you to encrypt the reverse proxy connection to backend servers and to perform client and server certificate authentication on that connection. The related directives are SSLProxyMachineCertificatePath, SSLProxyMachineCertificateFile, SSLProxyVerify, SSLProxyVerifyDepth, SSLProxyCACertificatePath, SSLProxyEngine, and SSLProxyCACertificateFile. Their syntax is similar to their regular counterparts.
Monday, April 16, 2007
BBCP another High Bandwith File Transfer Utility
Usage
To transfer the local file /local/path/largefile.tar to the remote system remotesystem as /remote/path/largefile.tar:
bbcp -P 2 -V -w 8m -s 16 /local/path/largefile.tar remotesystem:/remote/path/largefile.tar
Where:
-P 2: produces progress messages every 2 seconds.
-V: produces verbose output, including detailed transfer speed statistics.
-w 8m: sets the size of the disk I/O buffers.
-s 16: sets the number of parallel network streams to 16.
bbcp assumes the remote system's non-interactive environment contains the path to the bbcp utility. This can be verified with the following command:
ssh remotesystem which bbcp
If this is not the case the “-T” bbcp option can be used to specify how to start bbcp on the remote system. For example:
bbcp -P 2 -V -w 8m -s 16 -T 'ssh -x -a -oFallBackToRsh=no %I -l %U %H /remote/path/to/bbcp' /local/path/largefile.tar remotesystem:/remote/path/largefile.tar
Often during large transfers the connection between the transferring systems is lost. The "-a" option gives bbcp the ability to pick up where it left off. For example:
bbcp -k -a /remotesystem/homedir/.bbcp/ -P 2 -V -w 8m -s 16 /local/path/largefile.tar remotesystem:/remote/path/largefile.tar
To transfer an entire directory tree:
bbcp -r -P 2 -V -w 8m -s 16 /local/path/* remotesystem:/remote/path
When transferring files to the Cray XT3 (jaguar) at NCCS, it is necessary to specify a particular jaguar node as the destination host because the hostname jaguar.ccs.ornl.gov actually points to a server load balancing device which returns node addresses in a round robin fashion. For example:
bbcp -r -P 2 -V -w 8m -s 16 /local/path/* jaguar3.ccs.ornl.gov:/remote/path
Documentation
More information on bbcp can be found by typing “bbcp -h”
Howto on AutoSetOwner in RT3
This custom action sets the owner of the ticket to the current user if nobody owns the ticket yet. You can use this scrip action with any condition you want, e.g., On Resolve.
Description: AutoSetOwner
Condition: On Resolve
Action: User Defined
Custom action preparation code:
return 1;
Custom action cleanup code:
# get actor ID
my $Actor = $self->TransactionObj->Creator;
# if actor is RT_SystemUser then get out of here
return 1 if $Actor == $RT::SystemUser->id;
# get out unless ticket owner is nobody
return 1 unless $self->TicketObj->Owner == $RT::Nobody->id;
# ok, try to change owner
$RT::Logger->info("Auto assign ticket #". $self->TicketObj->id ." to user #". $Actor );
my ($status, $msg) = $self->TicketObj->SetOwner( $Actor );
unless( $status ) {
$RT::Logger->error( "Impossible to assign the ticket to $Actor: $msg" );
return undef;
}
return 1;
Template: Global template: Blank
This is a variation on AutoSetOwner; it auto-sets the owner of a ticket only if the person doing the correspondence is in the AdminCc watchers:
Condition: On correspond
Action: User Defined
Template: blank
## based on http://wiki.bestpractical.com/index.cgi?AutoSetOwner
## And testcode ~ line 576 of Queue_Overlay.pm (rt3.4.2)
my $Actor = $self->TransactionObj->Creator;
my $Queue = $self->TicketObj->QueueObj;
# if actor is RT_SystemUser then get out of here
return 1 if $Actor == $RT::SystemUser->id;
# get out unless ticket owner is nobody
return 1 unless $self->TicketObj->Owner == $RT::Nobody->id;
# get out unless $Actor is an AdminCc watcher
return 1 unless $Queue->IsWatcher(Type => 'AdminCc', PrincipalId => $Actor);
# do the actual 'status update'
my ($status, $msg) = $self->TicketObj->SetOwner( $Actor );
unless( $status ) {
$RT::Logger->warning( "can't set ticket owner to $Actor: $msg" );
return undef;
}
return 1;
HowTo on repairing MySQL tables
How to Repair Tables
The discussion in this section describes how to use myisamchk on MyISAM tables (extensions .MYI and .MYD).
You can also (and should, if possible) use the CHECK TABLE and REPAIR TABLE statements to check and repair MyISAM tables.
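Both statements can be issued from the shell without opening an interactive mysql session; a minimal check-then-repair pass over a single table (db_name and tbl_name are placeholders) might look like:
shell> mysql -u root -p db_name -e "CHECK TABLE tbl_name;"
shell> mysql -u root -p db_name -e "REPAIR TABLE tbl_name;"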
Symptoms of corrupted tables include queries that abort unexpectedly and observable errors such as these:
* tbl_name.frm is locked against change
* Can't find file tbl_name.MYI (Errcode: nnn)
* Unexpected end of file
* Record file is crashed
* Got error nnn from table handler
To get more information about the error, run perror nnn, where nnn is the error number. The following example shows how to use perror to find the meanings for the most common error numbers that indicate a problem with a table:
shell> perror 126 127 132 134 135 136 141 144 145
126 = Index file is crashed / Wrong file format
127 = Record-file is crashed
132 = Old database file
134 = Record was already deleted (or record file crashed)
135 = No more room in record file
136 = No more room in index file
141 = Duplicate unique key or constraint on write or update
144 = Table is crashed and last repair failed
145 = Table was marked as crashed and should be repaired
Note that error 135 (no more room in record file) and error 136 (no more room in index file) are not errors that can be fixed by a simple repair. In this case, you must use ALTER TABLE to increase the MAX_ROWS and AVG_ROW_LENGTH table option values:
ALTER TABLE tbl_name MAX_ROWS=xxx AVG_ROW_LENGTH=yyy;
If you do not know the current table option values, use SHOW CREATE TABLE.
For the other errors, you must repair your tables. myisamchk can usually detect and fix most problems that occur.
The repair process involves up to four stages, described here. Before you begin, you should change location to the database directory and check the permissions of the table files. On Unix, make sure that they are readable by the user that mysqld runs as (and by you, because you need to access the files you are checking). If it turns out you need to modify files, they must also be writable by you.
This section is for the cases where a table check fails, or you want to use the extended features that myisamchk provides.
If you are going to repair a table from the command line, you must first stop the mysqld server. Note that when you do mysqladmin shutdown on a remote server, the mysqld server is still alive for a while after mysqladmin returns, until all statement-processing has stopped and all index changes have been flushed to disk.
Stage 1: Checking your tables
Run myisamchk *.MYI or myisamchk -e *.MYI if you have more time. Use the -s (silent) option to suppress unnecessary information.
If the mysqld server is stopped, you should use the --update-state option to tell myisamchk to mark the table as “checked.”
You have to repair only those tables for which myisamchk announces an error. For such tables, proceed to Stage 2.
If you get unexpected errors when checking (such as out of memory errors), or if myisamchk crashes, go to Stage 3.
Stage 2: Easy safe repair
First, try myisamchk -r -q tbl_name (-r -q means “quick recovery mode”). This attempts to repair the index file without touching the data file. If the data file contains everything that it should and the delete links point at the correct locations within the data file, this should work, and the table is fixed. Start repairing the next table. Otherwise, use the following procedure:
1. Make a backup of the data file before continuing.
2. Use myisamchk -r tbl_name (-r means “recovery mode”). This removes incorrect rows and deleted rows from the data file and reconstructs the index file.
3. If the preceding step fails, use myisamchk --safe-recover tbl_name. Safe recovery mode uses an old recovery method that handles a few cases that regular recovery mode does not (but is slower).
Note: If you want a repair operation to go much faster, you should set the values of the sort_buffer_size and key_buffer_size variables each to about 25% of your available memory when running myisamchk.
If you get unexpected errors when repairing (such as out of memory errors), or if myisamchk crashes, go to Stage 3.
Stage 3: Difficult repair
You should reach this stage only if the first 16KB block in the index file is destroyed or contains incorrect information, or if the index file is missing. In this case, it is necessary to create a new index file. Do so as follows:
1. Move the data file to a safe place.
2. Use the table description file to create new (empty) data and index files:
shell> mysql db_name
mysql> SET AUTOCOMMIT=1;
mysql> TRUNCATE TABLE tbl_name;
mysql> quit
3. Copy the old data file back onto the newly created data file. (Do not just move the old file back onto the new file. You want to retain a copy in case something goes wrong.)
Go back to Stage 2. myisamchk -r -q should work. (This should not be an endless loop.)
You can also use the REPAIR TABLE tbl_name USE_FRM SQL statement, which performs the whole procedure automatically. There is also no possibility of unwanted interaction between a utility and the server, because the server does all the work when you use REPAIR TABLE.
Stage 4: Very difficult repair
You should reach this stage only if the .frm description file has also crashed. That should never happen, because the description file is not changed after the table is created:
1. Restore the description file from a backup and go back to Stage 3. You can also restore the index file and go back to Stage 2. In the latter case, you should start with myisamchk -r.
2. If you do not have a backup but know exactly how the table was created, create a copy of the table in another database. Remove the new data file, and then move the .frm description and .MYI index files from the other database to your crashed database. This gives you new description and index files, but leaves the .MYD data file alone. Go back to Stage 2 and attempt to reconstruct the index file.
How to AutoGen Users and passwd in RT3
How to auto generate users and passwords while submitting tickets through email in Request Tracker 3.
Add this code to AutoReply Template:
{
*RT::User::GenerateRandomNextChar = \&RT::User::_GenerateRandomNextChar;
if (($Transaction->CreatorObj->id != $RT::Nobody->id) &&
(!$Transaction->CreatorObj->Privileged) &&
($Transaction->CreatorObj->__Value('Password') eq '*NO-PASSWORD*')
) {
my $user = RT::User->new($RT::SystemUser);
$user->Load($Transaction->CreatorObj->Id);
my ($stat, $pass) = $user->SetRandomPassword();
if (!$stat) {
$OUT .=
"An internal error has occurred. RT was not able to set a password for you.
Please contact your local RT administrator for assistance.";
}
$OUT .= "
You can check the current status and history of your requests at:
".$RT::WebURL."
When prompted, enter the following username and password:
Username: ".$user->Name."
Password: ".$pass."
";
}
}
Clearing Mason Cache:
shell> rm -rf /opt/rt3/var/mason_data/obj/*
How to migrate MediaWiki?
MediaWiki Migration
Old Server:
mysqldump -u root -p wikidb > wikidb.sql
tar -cvf wiki.tar wiki ; wiki is the folder in the document root
New Server:
create database wikidb; (run this inside mysql; note that both MySQL versions should be the same)
grant create, select, insert, update, delete, lock tables on wikidb.* to wiki@localhost identified by 'YourPassword' ;
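Still on the new server, the dump and the tarball are then restored; a sketch, assuming /var/www/html is the document root:
mysql -u root -p wikidb < wikidb.sql
tar -xvf wiki.tar -C /var/www/html ; unpacks the wiki folder into the document root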
MediaWiki Upgrade
Copy all the new files to the wiki folder, then run php update.php from the maintenance folder after updating AdminSettings.php.
Qemu virtualization
Qemu Live CD Configurations:
$qemu -cdrom /dev/cdrom -boot d
$qemu -cdrom xxx.iso -boot d
$dd if=/dev/zero of=my_hdd.img bs=1024 count=2048000
$qemu -cdrom /dev/cdrom -hda my_hdd.img -boot d
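Once an OS has been installed from the CD onto the image, the same image boots directly from the virtual hard disk:
$qemu -hda my_hdd.img -boot c ;boot from the first hard disk instead of the CD-ROM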
Simple NFS in Linux
At the server Side:
vi /etc/exports
/path 192.168.0.0/16(ro)
Note: the file is /etc/exports (exportfs is the command), and there must be no space before "(ro)", or the options apply to all hosts.
exportfs -a
service portmap start
service nfs start
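At the client side, the export can then be mounted; a minimal sketch assuming the server is 192.168.0.1 and exports /path:
mkdir -p /mnt/nfs
mount -t nfs 192.168.0.1:/path /mnt/nfs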
Thursday, April 12, 2007
Horde another groupware
One of my experiments with Groupware and Webmail systems.
Horde
Installation
Horde requires some prerequisite software before you can use it. In addition, there are other software packages which, while not required, are recommended as without them you will experience very limited functionality. The following helps you to install the required and recommended software packages on a Fedora Core 4 system.
Apache packages
Horde is a web application, and as such, you need to provide a web server to use it. If you do not already have the Apache web server installed, you should do so at this time:
yum install httpd
chkconfig httpd on
/etc/init.d/httpd start
PHP Packages
As Horde is a PHP application, it requires that you have PHP installed. In addition to the base php package, Horde and its applications require several other PHP packages. The following installs the most commonly needed PHP packages.
yum install php php-xml php-imap php-devel
PEAR
The Fedora Core PHP package contains a PEAR installation, but it is missing some PEAR modules needed by Horde. You can install these modules using the following command:
pear install -f Net_IMAP Log Mail_Mime File Date Console_Getopt
Note for Fedora Core 5 you should also install the DB package for pear.
pear install -f DB
Read the note at http://pear.php.net/bugs/bug.php?id=5113 . If you have hit this problem, you can install a patched File package via:
pear install http://www.iptp.net/files/File-1.2.1.tgz
SQL
While a SQL server is not required to run Horde, it is recommended as much of the Horde functionality will be lost without it. You may run either MySQL or PostgreSQL, but you should not run both!
While you do not need to run the SQL server on the same machine that runs the Horde web applications, that is the most common setup for small sites, and hence the following assumes this type of setup.
MySQL
yum install php-mysql mysql mysql-server
/sbin/chkconfig --levels 235 mysqld on
/etc/init.d/mysqld start
(You might need more packages depending on your installation.)
OR
PostgreSQL
yum install postgresql-server php-pgsql postgresql-libs mod_auth_pgsql postgresql
/sbin/chkconfig --levels 235 postgresql on
/etc/init.d/postgresql start
CVS
The instructions below install Horde and its applications from CVS. In order to use CVS, you will need to have the cvs package installed in your machine. The following command can be used to install the cvs package.
yum install cvs
Horde
The following commands can be used to install Horde along with the more popular Horde applications, using anonymous CVS. There are other ways to install Horde and its applications other than CVS. However, this documentation only covers using CVS for installation.
cd /var/www/html
cvs -d :pserver:cvsread@anoncvs.horde.org:/repository login
Password: horde
cvs -d :pserver:cvsread@anoncvs.horde.org:/repository checkout horde
cd horde
cvs -d :pserver:cvsread@anoncvs.horde.org:/repository checkout framework imp kronolith mnemo nag passwd turba ingo
cd framework
pear channel-discover pear.horde.org
php install-packages.php
mkdir -p /var/horde/vfs
chown -R apache:apache /var/horde
Configuration
Once all the software is installed, you need to configure it for use with Horde. Below is some information on how to configure the various software packages. Note that configuration will vary depending on your needs, and the following is just a basic guide; you may need to adjust your configuration for your needs.
MySQL
Before you can use the MySQL server with Horde, you must set up the SQL server and create the needed database tables.
Create a MySQL account
First, you need to create a SQL user. In the instructions below, replace 'password' with the actual password you want to set for this account.
mysqladmin -u root password 'password'
mysqladmin -u root -h your.host.name password 'password'
Creating the MySQL Database and Tables
Next, you need to create the database and its tables. First, you must edit the database scripts Horde provides to set the database password to the password you set in the previous step.
cd /var/www/html/horde/scripts/sql
vi create.mysql.sql
Then change the database password in the file, and save it. Once you have set the password correctly in the script, you should run the script in order to create the database:
mysql -u root -p < create.mysql.sql
PostgreSQL
Before you can use the PostgreSQL server with Horde, you must set up the SQL server and create the needed database tables.
cd /var/www/html/horde/scripts/sql
vi pgsql_create.sql
Then change the database password in the file and save it. Once you have set the password correctly in the script, you should run the script in order to create the database:
psql -d template1 -f pgsql_create.sql -U postgres
psql -d horde -U horde -f auth.sql
psql -d horde -U horde -f category.sql
psql -d horde -U horde -f prefs.sql
Note that you may see some NOTICE messages from PostgreSQL noting that implicit indexes have been created; these are normal and can be ignored.
Horde
First, you need to install the distribution default configuration files, present in the config subdirectory within each Horde application (including the base Horde configuration directory itself):
cd /var/www/html/horde
for a in . mnemo nag turba imp ingo kronolith passwd; do cd /var/www/html/horde/$a/config; for f in *.dist; do cp $f `basename $f .dist`; done; done
Next, we want to make sure that all the files have the correct file permissions:
cd /var/www/html
chown -R apache:apache horde
chmod -R o-rwx horde
Finally, you now need to do the basic configuration of all the Horde applications using the Horde Administrative Interface. Log in to your Horde installation at http://your.host.name/horde/. Once you're in, click on the Administration link on the sidebar, then the Setup sub-option. The default administrator password is mailadmin.
You should see a list of available Horde applications in the main frame; you now need to go through this list and configure each Horde application as you please. Click on an entry in this list and you should be brought to a configuration screen. Go through each tab within this screen (if there are multiple tabs; otherwise there will just be a single page) and change any settings as you see fit (the default options are usually sufficient if you don't feel comfortable editing all the available variables).
Once you have finished configuring an application, click on the Generate XXX Configuration button at the bottom of the page to auto-generate the relevant conf.php file for the specific application. Repeat this process for every application in the Setup page.
Note that the above only configures the base configuration of the applications. There are other configuration files which you may also want to configure for each application. Such configuration must be done by hand. See the docs/INSTALL file for each application for more information on configuring that application.
How to configure proxy for common linux apps
pear
To use a proxy with PEAR, use:
$ pear config-set http_proxy http://proxypc.localdomain
yum
yum does not read shell-style export lines; for yum to work behind a proxy, add this setting to the [main] section of /etc/yum.conf:
proxy=http://192.168.65.253:8080
wget
For wget to work, add this to ~/.bash_profile:
export http_proxy=http://192.168.65.253:8080
export ftp_proxy=http://192.168.65.253:8080
then run command
source ~/.bash_profile
How to add a disk to LVM
LVM
Quick Notes First:
Formatting the new Disk
Suppose the disk is /dev/sdb, the second SCSI disk:
fdisk /dev/sdb
create as many partitions as you need using command n
Label them with command t as 8e for making it Linux LVM
Write and Exit with the command w.
Format any partitions you plan to use as plain filesystems with the mkfs command (partitions handed to LVM below do not need a filesystem):
mkfs -t ext3 -c /dev/sdb1
LVM commands
pvcreate /dev/sdb1
vgextend VolGroup00 /dev/sdb1
lvextend -L 15G /dev/VolGroup00/LogVol01 ;for extending LogVol to 15GB
lvextend -L+1G /dev/VolGroup00/LogVol01 ;for adding one more GB to Logical Volume LogVol01
ext2online /dev/VolGroup00/LogVol01 ;for resizing the Logical Volumes
That's it, finished.
Extra Instructions
Creating Physical Volumes for LVM
Since LVM requires entire Physical Volumes to be assigned to Volume Groups, you must have a few empty partitions ready to be used by LVM. Install the OS on a few partitions and leave a bit of empty space. Use fdisk under Linux to create a number of empty partitions of equal size. You must mark them with fdisk as type 0xFE. We created five 256MB partitions, /dev/hda5 through /dev/hda9.
Registering Physical Volumes
The first thing necessary to get LVM running is to register the physical volumes with LVM. This is done with the pvcreate command. Simply run pvcreate /dev/hdxx for each hdxx device you created above. In our example, we ran pvcreate /dev/hda5 and so on.
Creating a Volume Group
Next, create a Volume Group. You can set certain parameters with this command, like physical extent size, but the defaults are probably fine. We'll call the new Volume Group vg01. Just type vgcreate vg01 /dev/hda5.
When this is done, take a look at the Volume Group with the vgdisplay command. Type vgdisplay -v vg01. Note that you can create up to 256 LVs, can add up to 256 PVs, and each LV can be up to 255.99GBs! More important, note the Free PE line. This tells you how many Physical Extents we have to work with when creating LVs. For a 256MB disk, this reads 63 because there is an unused remainder smaller than the 4MB PE size.
Creating a Logical Volume
Next, let's create a Logical Volume called lv01 in VG vg01. Again, there are some settings that may be changed when creating an LV, but the defaults work fine. The important choice to make is how many Logical Extents to allocate to this LV. We'll start with 4 for a total size of 16MB. Just type lvcreate -l4 -nlv01 vg01. You may also specify the size in MBs by using -L instead of -l, and LVM will round off the result to the nearest multiple of the LE size.
Take a look at your LV with the lvdisplay command by typing lvdisplay -v /dev/vg01/lv01. You can ignore the page of Logical extents for now, and page up to see the more interesting data.
Adding a disk to the Volume Group
Next, we'll add /dev/hda6 to the Volume Group. Just type vgextend vg01 /dev/hda6 and you're done! You can check this out by using vgdisplay -v vg01. Note that there are now a lot more PEs available!
Creating a striped Logical Volume
Note that LVM created your whole Logical Volume on one Physical Volume within the Volume Group. You can also stripe an LV across two Physical Volumes with the -i flag in lvcreate. We'll create a new LV, lv02, striped across hda5 and hda6. Type lvcreate -l4 -nlv02 -i2 vg01 /dev/hda5 /dev/hda6. Specifying the PV on the command line tells LVM which PEs to use, while the -i2 command tells it to stripe it across the two.
You now have an LV striped across two PVs!
Moving data within a Volume Group
Up to now, PEs and LEs were pretty much interchangeable. They are the same size and are mapped automatically by LVM. This does not have to be the case, though. In fact, you can move an entire LV from one PV to another, even while the disk is mounted and in use! This will impact your performance, but it can prove useful.
Let's move lv01 to hda6 from hda5. Type pvmove -n/dev/vg01/lv01 /dev/hda5 /dev/hda6. This will move all LEs used by lv01 mapped to PEs on /dev/hda5 to new PEs on /dev/hda6. Effectively, this migrates data from hda5 to hda6. It takes a while, but when it's done, take a look with lvdisplay -v /dev/vg01/lv01 and notice that it now resides entirely on /dev/hda6!
Removing a Logical Volume from a Volume Group
Let's say we no longer need lv02. We can remove it and place its PEs back in the empty pool for the Volume Group. First, unmount its filesystem. Next, deactivate it with lvchange -a n /dev/vg01/lv02. Finally, delete it by typing lvremove /dev/vg01/lv02. Look at the Volume Group and notice that the PEs are now unused.
Removing a disk from the Volume Group
You can also remove a disk from a volume group. We aren't using hda5 anymore, so we can remove it from the Volume Group. Just type vgreduce vg01 /dev/hda5 and it's gone!
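If the disk is to be reused outside LVM, the LVM label can be wiped afterwards with pvremove (part of the same LVM toolset):
pvremove /dev/hda5 ;erase the physical volume label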