Tuesday, October 2, 2012

leave kerberos AFS tokens behind for root

To access the AFS home directory on a VM where we log in as root and have no AFS tokens, we have to export the AFS tokens of the user whose directories we want to access. For example, if test-user wants to access his AFS home as root from a VM, the steps are:

  1. Get Kerberos tickets for my username
  2. Forward the tickets to the VM by logging in
  3. Note the ticket cache filename (klist shows it)
  4. Log out of the VM
  5. Get non-forwardable root tickets
  6. Log in to the VM as root
  7. Export the ticket cache from step 3: export KRB5CCNAME=/tmp/filename
  8. Get new AFS tokens with afslog
  9. Access to test-user's AFS files should now work
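
The nine steps above can be sketched as a command transcript. This is illustrative only, not runnable as-is: the VM name, the AFS path and the cache filename are placeholders, and the exact kinit flags depend on your Kerberos implementation.

```shell
kinit test-user                     # 1. get Kerberos tickets
ssh -K vm.example.org               # 2. forward them to the VM by logging in
klist                               # 3. note the ticket cache, e.g. FILE:/tmp/filename
exit                                # 4. log out of the VM
kinit -F root                       # 5. non-forwardable root tickets (-F is the MIT kinit flag)
ssh root@vm.example.org             # 6. log in to the VM as root
export KRB5CCNAME=/tmp/filename     # 7. point at the cache noted in step 3
afslog                              # 8. get new AFS tokens
ls /afs/cell/home/test-user         # 9. test-user's AFS files are now accessible
```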

This also shows that your Kerberos tickets can be misused if they are not destroyed (kdestroy) before you log out.



Thursday, August 9, 2012

Install MySQL and pymysql on CentOS 6

Install MySQL-server and MySQL client:

yum install mysql-server mysql

Install the required connector (for Python, in our case):

yum install MySQL-python

We already had some scripts that used pymysql instead of the MySQL-python driver,
so we also installed pymysql using pip or easy_install. easy_install comes with python-setuptools:

yum install python-setuptools

I prefer pip, so:

easy_install pip

pip install pymysql
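
A quick sanity check of the driver (a sketch only; it assumes a running mysqld and the myUser/MYTEST account set up later in this post, so it will not run without them):

```shell
python - <<'EOF'
import pymysql

# hypothetical credentials matching the rest of this post
conn = pymysql.connect(host='localhost', user='myUser',
                       password='some-password', database='MYTEST')
cur = conn.cursor()
cur.execute("SELECT VERSION()")
print(cur.fetchone()[0])    # prints the server version string
conn.close()
EOF
```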

Make the MySQL daemon start automatically at boot:

chkconfig --levels 235 mysqld on

Start mysqld manually for the first time:

service mysqld start

Log in to MySQL:

mysql -u root

Set a password for root:

SET PASSWORD FOR 'root'@'localhost'=PASSWORD('some-password');
SET PASSWORD FOR 'root'@'localhost.localdomain'=PASSWORD('some-password');
SET PASSWORD FOR 'root'@'127.0.0.1'=PASSWORD('some-password');

Note that localhost.localdomain should be replaced with the FQDN of the host.

Drop the anonymous users:

DROP USER ''@'localhost';
DROP USER ''@'localhost.localdomain';

Create the required database:

CREATE DATABASE MYTEST;

Create a new user with full rights on this db only:

CREATE USER 'myUser'@'localhost' IDENTIFIED BY 'some-password';
GRANT ALL PRIVILEGES ON MYTEST.* TO 'myUser'@'localhost';
FLUSH PRIVILEGES;
(Note that myUser can only connect from localhost.)
EXIT

It is probably best for security reasons to allow connections from localhost only. Add the following line to /etc/my.cnf under the [mysqld] and [mysqld_safe] sections:

skip-networking
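
For reference, a minimal /etc/my.cnf sketch with the option in place (the other settings shown are typical CentOS defaults; local clients still connect over the UNIX socket):

```
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
skip-networking

[mysqld_safe]
log-error=/var/log/mysqld.log
skip-networking
```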

P.S. Make sure not to leave any orphaned accounts with access to the database.

Friday, July 13, 2012

Install IPM on HPC Cluster

IPM is a very handy profiling tool to have. It gives a nice overview of the overall performance of a parallel program.
It uses sampling to measure performance, so no changes to the source code of the parallel program are required, and there is almost no overhead since no code is inserted into the program. It is not good at capturing the nitty-gritty details, but it is handy and fast nonetheless.

Installation:

Download the tarball from SourceForge and untar it in some directory on the cluster.
cd into that directory and configure with the command:

./configure --prefix=/path/for/install --with-compiler=INTEL --with-arch=X86 --with-cpu=OPTERON --with-OS=LINUX

We chose to build IPM as a shared library, so

make shared

builds and installs it in the requested directory.

Usage:

export LD_PRELOAD=/path/to/install/lib/libipm.so
mpirun ./your_program

or if using a batch script, do the same in the script.
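
In a batch script this can look like the following sketch (hypothetical PBS directives; the resource request, core count and paths are placeholders for whatever your cluster uses):

```shell
#!/bin/sh
#PBS -l nodes=4:ppn=8            # hypothetical resource request
cd $PBS_O_WORKDIR                # PBS starts jobs in $HOME
export LD_PRELOAD=/path/to/install/lib/libipm.so
mpirun -np 32 ./your_program
```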


Monday, June 11, 2012

Plone Backup with Tivoli

We have now set up the web server backup on TSM. Plone is installed on a separate volume mounted on the VM. The Tivoli Client was installed on our VM and we had to make a couple of modifications to the dsm files.

/opt/tivoli/tsm/client/ba/bin/dsm.sys

specifies which server/port/address to connect to, the name of the include-exclude file, and where to place the log files. We provided the server details and the name of the include-exclude file.

/opt/tivoli/tsm/client/ba/bin/dsm.opt

specifies the name of the server and the mount points. The Servername does not have to be defined in dsm.opt file but if it is, then it must match the servername in dsm.sys file.

The mount points are specified with the domain option. e.g.

domain /usr/local/Plone/

We can specify multiple domains separated by one or more spaces e.g.

domain /usr/local/Plone/ /home/myUsername/

Tivoli will back up all files under the specified domains. This may not be ideal, as we might want to exclude some folders, e.g. if we are backing up "/" we may not want to include "/tmp/" in our backup.
We can specify the include-exclude list in the dsm.opt file, or we can create a separate file for the list and give its path and filename in dsm.sys with the inclexcl option.
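
For example, a separate include-exclude file (referenced from dsm.sys via the inclexcl option) could look like this hypothetical sketch; in TSM syntax /.../ matches any number of directory levels, and the client evaluates the list from the bottom up:

```
exclude.dir /tmp
exclude     /.../*.tmp
include     /usr/local/Plone/.../*
```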


Finally we set up a daily cron job to run the incremental Tivoli backup, as an entry in the system crontab (e.g. a file in /etc/cron.d):

TIVOLIBIN=/opt/tivoli/tsm/client/ba/bin

48 5 * * * root $TIVOLIBIN/dsmc incr > /tmp/dsmc.log 2>&1 || cat /tmp/dsmc.log /var/log/dsmerror.log

Restoring Plone from backup

# dsmc
dsmc> rest /path/to/Plone/* /path/to/restore/ -su=yes

We can omit the destination folder, in which case Plone is restored to its original location, overwriting everything that was previously there.

Friday, June 1, 2012

Something about Cron

(info valid on CentOS 5)
Cron is a daemon that executes commands/scripts automatically at scheduled times. Cron is installed on most Linux distributions by default and started from the init scripts (on OS X, the preferred equivalent is launchd). To check whether cron is running, try:

ps aux | grep crond

Cron jobs can be created by placing a script file in one of the /etc/cron.* directories.

ls -l /etc | grep cron

lists the hourly, daily and all other directories.

We prefer the crontab for simpler commands or when we want more flexibility with the scheduling. Otherwise, placing the script in one of the /etc/cron.* directories works just fine.


An entry in the system crontab file consists of 7 fields, each separated by whitespace:

min   hour  day-of-month  month  day-of-week  user  command
0-59  0-23  1-31          1-12   0-6 (0=Sun)


These directories are set up via the /etc/crontab file, as shown by reading it:


SHELL=/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/bin
MAILTO=root
HOME=/

# run-parts
01 * * * * root run-parts /etc/cron.hourly
# executes 'run-parts /etc/cron.hourly' as root at minute 01 of every hour, every day.
02 4 * * * root run-parts /etc/cron.daily
# executes 'run-parts /etc/cron.daily' as root at 04:02 every day.
22 4 * * 0 root run-parts /etc/cron.weekly
# executes 'run-parts /etc/cron.weekly' as root at 04:22 every Sunday (0=Sunday).
42 4 1 * * root run-parts /etc/cron.monthly
# executes 'run-parts /etc/cron.monthly' as root at 04:42 on the 1st of every month, irrespective of the day of week.

Placing a script in /etc/cron.daily will execute it at the time shown above.
Alternatively, we can use the crontab -e command to edit our own crontab file and add a line to it. This gives us finer-grained control over the scheduling of our job (note that per-user crontab entries have no user field). For example, to run our script every half hour we could add a line such as:

*/30 * * * * /path/to/my/script.sh  #(*/30 = any minute divisible by 30, i.e. 0 and 30)

every 3 hours:

00 */3 * * * /path/to/my/script.sh

every 3 days at midnight:

00 00 */3 * * /path/to/my/script.sh

every 2 hours between 10:00 and 16:00:

00 10-16/2 * * * /path/to/my/script.sh

Generate a Kerberos Keytab File

A keytab is a file containing pairs of Kerberos principals and their keys, which are derived from the users' passwords. It lets a user obtain tickets without having to type a password. It is particularly useful for allowing scripts to get Kerberos tickets without writing the password in plain text anywhere. Keytab files must be handled very carefully, because anybody who can read the file can use it to obtain tickets.

Generate a keytab file:
ktutil -k name.keytab add -p username@REALM.BLA.BLA -e arcfour-hmac-md5 -e des-cbc-md5 -e des-cbc-md4 -e des-cbc-crc -e des3-cbc-sha1 -e aes128-cts-hmac-sha1-96

To get tickets using the keytab file:
kinit -kt name.keytab username@REALM.BLA

To list the keys in the keytab file:
ktutil -k name.keytab list

Keys inside a keytab file become invalid if the user changes his password.

Now Hatch that Egg

Easy Install:

easy_install is a Python tool bundled with setuptools, meant for easy download, installation and management of Python packages and distributions.

To install a distribution, we only need to supply the name or URL of the Python distribution to easy_install:

easy_install mySQLdb

or we could install an egg from a directory on our computer as:

easy_install ~/Downloads/chicken-0.1-py2.7.egg

The same could be used for an sdist inside a tarball as:


easy_install ~/Downloads/mypackage-0.1.tar.gz


The packages are installed in Python's site-packages directory by default, although the directory can be specified with the --install-dir option.
Similarly, any scripts inside the installed distribution are placed in Python's default scripts folder unless instructed otherwise with the --script-dir option.

Any packages installed with easy_install are added to the easy_install.pth file in the install directory.

Upgrading a package:

Packages can be upgraded easily with easy_install



easy_install "somePackage==2.1"
or
easy_install "somePackage>2.1"
or we could just add the --upgrade option to upgrade to the latest version:
easy_install --upgrade somePackage

Uninstalling an Egg/Package:

Removing an egg or package is as easy as deleting the egg file/directories and any matching scripts.
However, it is best to also remove the package's line from the easy_install.pth file. This can be done manually or by running:


easy_install -m packageToRemove


This makes sure that Python will not search for the package anymore.










Laying the Egg

Modules:

To understand eggs, let's start with modules. Don't panic! A Python module is simply a Python file with Python code. It is the same as a Python program, the only difference being that programs are designed to be run, whereas modules consist of code written with reusability in mind, i.e. modules are mainly designed to be imported and used by other programs.

#file echo.py

def sayIt(arg_str):
    print("you said: " + arg_str)
Enter the Python interpreter and execute:

>>> from echo import sayIt
>>> sayIt ("I said something")    
Python files can also be run as a script, such as:

python echo.py "I said something"

However, to accomplish this we need to add:

if __name__ == "__main__":
    import sys
    sayIt(sys.argv[1])
Thus the code can be run as a script and also imported as a module. The code that parses the command line is executed only when the module is run as the main file. Running the module as a script is often very convenient.
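
Putting the two snippets together, echo.py can be exercised both ways (python3 here; the post's CentOS box used plain python):

```shell
cat > echo.py <<'EOF'
def sayIt(arg_str):
    print("you said: " + arg_str)

if __name__ == "__main__":
    import sys
    sayIt(sys.argv[1])
EOF

python3 echo.py "I said something"                 # run as a script
python3 -c 'from echo import sayIt; sayIt("hi")'   # imported as a module
```

Both invocations print a "you said: ..." line.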


When importing a module, the interpreter first tries to search in built-in modules. If not found, the search continues in directories defined by the variable sys.path which usually include:
  • The directory that contains the program (usually current directory)
  • PYTHONPATH env variable (if set)
  • Installation-dependent defaults (set at install time)

Packages: 

Module files are organised into packages. A package is a collection of modules inside a directory containing an __init__.py file. The __init__.py tells the Python interpreter to treat the directory as a Python package.


For example, MyPackage could look something like:



MyPackage/
    __init__.py
    module1.py
    module2.py


Similarly MyPackage.subPackage would look like



MyPackage/
    __init__.py
    module1.py
    module2.py
    subPackage/
        __init__.py
        module3.py
        module4.py

Packages can be imported in many ways depending on what functionality is exactly required from the package.

We can either:
  • import package.module or import package.subpackage.module
    • usage: package.subpackage.module.method()
  • from package.subpackage import module
    • usage: module.method()
  • from package.subpackage.module import method
    • usage: method()
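
The MyPackage/subPackage layout and the three import styles above can be exercised with a quick sketch (hypothetical module contents; python3 here, plain python back then):

```shell
# build the MyPackage/subPackage layout from above
mkdir -p MyPackage/subPackage
touch MyPackage/__init__.py MyPackage/subPackage/__init__.py
cat > MyPackage/subPackage/module3.py <<'EOF'
def method():
    return "hello from module3"
EOF

python3 -c 'import MyPackage.subPackage.module3; print(MyPackage.subPackage.module3.method())'
python3 -c 'from MyPackage.subPackage import module3; print(module3.method())'
python3 -c 'from MyPackage.subPackage.module3 import method; print(method())'
```

All three commands print "hello from module3".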

__init__.py:

In the simplest cases, the __init__.py file can be empty. However, we can provide some code, e.g. sometimes it is convenient to be able to load all the modules inside a package (from MyPackage import *), and for that we must list the included modules:



__all__ = ['module1', 'module2', 'moduleN']



Since __init__.py is the first file to be loaded in a package, it may contain initialisation code for the package.
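
A minimal sketch of __all__ in action, using a hypothetical package called pkgdemo:

```shell
mkdir -p pkgdemo
cat > pkgdemo/__init__.py <<'EOF'
__all__ = ['module1']    # only module1 is pulled in by 'from pkgdemo import *'
EOF
cat > pkgdemo/module1.py <<'EOF'
def ping():
    return "pong"
EOF

# python3 here; plain 'python' on the boxes in this post
python3 -c 'from pkgdemo import *; print(module1.ping())'
```

The last command prints "pong".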


It is best to place python packages in directories that are on sys.path. Thus they can be easily found and imported whenever required. 

Distribute

Now we prepare to distribute! 

We can either create an sdist (source distribution) using distutils.core, or an egg (binary distribution) using setuptools. More on that later.

Source Distribution:

In the simplest of cases, the distribution would contain a setup.py file and the package folder such as:

MyDistribution/
    setup.py
    README.txt (optional)
    mydistribution/
        module1.py
        module2.py

The setup.py file contains metadata about the project, e.g.:

from distutils.core import setup

setup(
    name='MyDistribution',
    version='0.1dev',
    packages=['mydistribution',],
    license='Creative Commons Attribution-Noncommercial-Share Alike license',
    long_description=open('README.txt').read(),
)

Now to create a release we need to run:

python setup.py sdist

This will create a dist sub-directory in the project directory containing the distribution as a compressed archive, e.g. MyDistribution-0.1.tar.gz

Note that the distribution does not include all the files in the directory :-) more on that later.
All the files included in the distribution are listed in a MANIFEST file, which is also created by sdist.

The distribution is now ready to be registered anywhere. 

Binary Distribution (Egg):

Python eggs are similar to JAR files in Java or RPMs in Linux.
To create a binary distribution, the setup.py file is changed to:

from setuptools import setup

setup(
    name='MyDistribution',
    version='0.1dev',
    packages=['mydistribution',],
    license='Creative Commons Attribution-Noncommercial-Share Alike license',
    long_description=open('README.txt').read(),
)
To create a release we need to run:

python setup.py bdist_egg

The dist subdirectory now contains the newly laid egg MyDistribution-0.1-pyX.X.egg (where X.X is the Python version used to create the egg).

Next we register the egg :-)




ploneformgen show/hide fields based on selection field

We wanted to show/hide some fields in our multi-stage form created in Plone via PloneFormGen. It is a neat trick with jQuery, and here is how we did it:

  • Open ZMI/yourSite
  • Navigate to the form folder
  • Add new Python script 
  • Click Add and edit
  • Paste the code

return \
"""
<script type="text/javascript">
$(document).ready(function() {
   $('#archetypes-fieldname-div-to-hide').hide();
$("#selection-field").change(function()
{
    if ($(this).val() == "I like this")
        $('#archetypes-fieldname-but_why').slideDown();
    else
        $('#archetypes-fieldname-but_why').slideUp();
    });
});
</script>
"""

Now open the form -> Edit -> Overrides, and in the Header Injections field enter the id of the Python script as:
here/scriptID

works like a charm :)

External Methods in Plone 4

In Plone 4, /path/to/Plone/instance/Extensions is not created any more. To add an external method:


Create a folder called Extensions anywhere (we chose to create it in the buildout directory).
Then use the "zope-conf-additional" parameter in plone.recipe.zope2instance to add this location as another Extensions folder:


Open base.cfg, search for [instance], and add the line:

zope-conf-additional = extensions ${buildout:directory}/Extensions

This will make the folder /usr/local/Plone/zeocluster/Extensions available.
Now we can place our script files there and register them via the ZMI.