MySQL: get size of tables in a database

To find the size of each table in a database, run this query in the MySQL console:

SELECT table_name AS "Tables",
round(((data_length + index_length) / 1024 / 1024), 2) "Size in MB"
FROM information_schema.TABLES
WHERE table_schema = "$DB_NAME"
ORDER BY (data_length + index_length) DESC;

E.g. for a database named mydatabase:

SELECT table_name AS "Tables",
round(((data_length + index_length) / 1024 / 1024), 2) "Size in MB"
FROM information_schema.TABLES
WHERE table_schema = "mydatabase"
ORDER BY (data_length + index_length) DESC;
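A related query gives the total size of the whole database in a single row; it is a straightforward variant of the query above (again using mydatabase as the example name):

```sql
SELECT table_schema AS "Database",
round(SUM(data_length + index_length) / 1024 / 1024, 2) "Size in MB"
FROM information_schema.TABLES
WHERE table_schema = "mydatabase"
GROUP BY table_schema;
```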

Nginx: proxy requests are redirecting to the upstream name.

If you are using nginx as a reverse proxy and proxied requests that result in 301/302 redirects end up pointing at the upstream name, e.g. traffic is proxied to an upstream called backend:

upstream backend {
    # upstream servers go here
}

location / {
    proxy_pass http://backend;
}

and requests end up at URLs like http://backend/new.html, then you need to specify the Host header on proxied requests, which is:

proxy_set_header Host $http_host;

so finally it becomes:

location / {
    proxy_set_header Host $http_host;
    proxy_pass http://backend;
}

Now all your requests and 301/302 redirects will contain a valid domain name (host).
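An alternative worth knowing about is nginx's proxy_redirect directive, which rewrites the Location header of upstream redirect responses. A sketch (assuming the same backend upstream as above):

```nginx
location / {
    proxy_pass http://backend;
    # Rewrite "Location: http://backend/..." in 301/302 responses
    # back to the host the client originally requested
    proxy_redirect http://backend/ http://$http_host/;
}
```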

How to add indexer in a Custom Module?

Follow these steps:
class Module_Model_CmsIndexer extends Mage_Index_Model_Indexer_Abstract
{
    public function getName()
    {
        return 'Add CMS Cache';
    }

    public function getDescription()
    {
        return 'Rebuild Cache Index for all CMS Pages';
    }

    // Required by the abstract class: register the entities affected by an event
    protected function _registerEvent(Mage_Index_Model_Event $event)
    {
        // register event data here
    }

    protected function _processEvent(Mage_Index_Model_Event $event)
    {
        // process
    }

    public function reindexAll()
    {
        // Your function to be executed
    }
}


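For Magento to pick the indexer up and list it under System → Index Management, it must also be declared in the module's etc/config.xml. A minimal sketch, where cms_indexer is an assumed indexer code and module/cmsIndexer an assumed model alias:

```xml
<config>
    <global>
        <index>
            <indexer>
                <!-- "cms_indexer" is an assumed indexer code -->
                <cms_indexer>
                    <model>module/cmsIndexer</model>
                </cms_indexer>
            </indexer>
        </index>
    </global>
</config>
```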

What is configure, make, make install?

There is a lot of confusion about the usual configure, make, make install sequence for getting a program running on Linux. Unfortunately, many developers today have never used a compiler to install an application, having only used GUI application builders.

The make utility is designed to decrease a programmer's need to remember lines of commands. I guess the honest way of saying that is: it decreases a programmer's need to document. In any case, the idea is that if you establish a set of rules to create a program in a format make understands, you don't have to remember them again. The make utility has a set of built-in rules, so you only need to tell it what new things it needs to know to build your particular utility. For example, if you typed in make program, make would first look for some new rules from you. If you didn't supply any, it would look at its built-in rules. One of those built-in rules tells make that it can run the linker (ld) on a file name ending in .o to produce the executable program.

So, make would look for a file named program.o. But, it wouldn’t stop there. Even if it found the .o file, it has some other rules that tell it to make sure the .o file is up to date. In other words, newer than the source program. The most common source program on Linux systems is written in C and its file name ends in .c.

If make finds the .c file (program.c in our example) as well as the .o file, it would check their timestamps to make sure the .o was newer. If it was not newer or did not exist, it would use another built-in rule to build a new .o from the .c (using the C compiler). This same type of situation exists for other programming languages. The end result, in any case, is that when make is done, assuming it can find the right pieces, the executable program will be built and up to date.

The old UNIX joke, by the way, is what early versions of make said when they could not find the necessary files. In the example above, if there was no program.o, program.c or any other source format, make would have said:
make: don't know how to make program. Stop.

Getting back to the task at hand, the default file for additional rules is Makefile in the current directory. If you have some source files for a program and there is a Makefile there, take a look. It is just text. The lines that have a word followed by a colon are targets. That is, these are words you can type following the make command name to do various things. If you just type make with no target, the first target will be executed.

What you will likely see at the beginning of most Makefile files are what look like some assignment statements. That is, lines with a couple of fields with an equal sign between them. Surprise, that is what they are. They set internal variables in make. Common things to set are the location of the C compiler (yes, there is a default), version numbers of the program and such.
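As a concrete (hypothetical) illustration, a minimal Makefile for the program example above might look like this. Note that the indented command lines must begin with a tab character:

```makefile
# Variables set at the top, as described above
CC = gcc
CFLAGS = -O2

# First target, so it runs when you type plain "make"
program: program.o
	$(CC) $(CFLAGS) -o program program.o

# Rebuild the .o whenever the .c is newer
program.o: program.c
	$(CC) $(CFLAGS) -c program.c

# Run as root: make install
install: program
	install -m 755 program /usr/local/bin/

clean:
	rm -f program program.o
```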

This brings us back to configure. On different systems, the C compiler might be in a different place, you might be using ZSH instead of BASH as your shell, the program might need to know your host name, it might use a dbm library and need to know whether the system has gdbm or ndbm, and a whole bunch of other things. You used to do this configuring by editing Makefile. That was another pain for the programmer, and it also meant that any time you wanted to install software on a new system you needed to do a complete inventory of what was where.

As more and more software became available and more and more POSIX-compliant platforms appeared, this got harder and harder. This is where configure comes in. It is a shell script (generally written with GNU Autoconf) that goes out and looks for software and even tries various things to see what works. It then takes its template files (such as Makefile.in) and builds a Makefile (and possibly some other files) that works on the current system.

Background work done, let me put the pieces together.

You run configure (you usually have to type ./configure, as most people don't have the current directory in their search path). This builds a new Makefile.
Type make. This builds the program. That is, make is executed, it looks for the first target in Makefile and does what the instructions say. The expected end result is an executable program.
Now, as root, type make install. This again invokes make; make finds the target install in Makefile and follows the directions to install the program.


Set timezone of MySQL Server

Timezone issues with your database server often cause wrong information to be entered in reports, transaction tables, etc.

To fix timezone issues with your MySQL server, just log in to MySQL and execute the following query:

SET GLOBAL time_zone = '<timezone offset>';

E.g. SET GLOBAL time_zone = '+5:30';

If you want to check the current timezone offset, use this: SELECT TIMEDIFF(NOW(), UTC_TIMESTAMP);

Nginx: show/set the real (public) IP address behind an Amazon ELB load balancer

To make it work, we need to add the following line to the nginx configuration file:

real_ip_header X-Forwarded-For;

This makes nginx trust the X-Forwarded-For header from anyone,
which makes the real IP visible in logs and usable in other programs
where the real IP is required.
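Trusting the header from anyone lets clients spoof their IP; a safer sketch restricts the trusted source with set_real_ip_from (the CIDR below is an assumed private VPC range — substitute the range your load balancer actually uses):

```nginx
# Only believe X-Forwarded-For when the connection comes from the ELB
set_real_ip_from 10.0.0.0/8;   # assumed VPC range, adjust to yours
real_ip_header X-Forwarded-For;
```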

Nginx: allow cross-domain AJAX requests

To enable cross-origin calls, add this to the nginx configuration for the domain (the allowed-origin value was left out of the original; substitute the origin you want to permit, e.g. https://example.com):

location / {
    add_header 'Access-Control-Allow-Origin' '<allowed origin>';
    add_header 'Access-Control-Allow-Credentials' 'true';
    add_header 'Access-Control-Allow-Methods' 'GET';
}

That's it. GET requests from the allowed origin to the configured domain will now be permitted.
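For some cross-origin requests the browser first sends a preflight OPTIONS request; a common way to answer it (sketched here with the same placeholder origin) is an early return in the same location block:

```nginx
location / {
    if ($request_method = 'OPTIONS') {
        add_header 'Access-Control-Allow-Origin' '<allowed origin>';
        add_header 'Access-Control-Allow-Methods' 'GET';
        add_header 'Access-Control-Max-Age' 86400;
        return 204;   # no body is needed for a preflight response
    }
}
```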

Automate Magento Image Cache – Don’t let your website down when catalog image cache is flushed!

Clear Catalog Image Cache

No Data Received error after clicking on  Flush Catalog Image Cache

Problem – you need to replace a few existing images, but clicking "Flush Catalog Image Cache" in Magento Cache Management empties the whole cache folder, which consumes a lot of CPU and memory, and Magento then regenerates all product images (say, to replace 2 product images it removes the image cache of your entire 3000-product catalog). How can the image cache be removed selectively?

Goal – to develop an automated solution for removing catalog image cache selectively.

What removing the Magento catalog image cache actually does is empty the media/catalog/product/cache folder.

Directory where the operation is executed: <Magento Root Directory>/media/catalog/product/cache

Let's first install a program called 'incron', which monitors directories and files for modifications and responds to file events.

– Red Hat / CentOS: sudo yum install incron

– Ubuntu / Debian-based flavours: sudo apt-get install incron

Turn the service on:

# service incrond start

You also need a script to find and remove the cached images of a replaced SKU:

let's assume you are in the /root directory,

# vim

Add the following content to it:

find /<Your Magento Directory>/media/catalog/product/cache/ -name "$1*" -exec rm -f {} \;

save and close it.
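To illustrate what the script does, here is a self-contained dry run on hypothetical cache files in a temporary directory (the SKU and file names are made up):

```shell
# A throwaway directory standing in for the Magento image cache
cache=$(mktemp -d)
mkdir -p "$cache/K/P"
touch "$cache/K/P/KP00234_small.jpg" \
      "$cache/K/P/KP00234_large.jpg" \
      "$cache/K/P/KP00999.jpg"

# Same find pattern as the script above, with $1 replaced by a sample SKU
SKU="KP00234"
find "$cache" -name "${SKU}*" -exec rm -f {} \;

ls "$cache/K/P"
```

Only KP00999.jpg survives: both cached variants of the replaced SKU are removed, while the rest of the cache is untouched.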

Now type the following command to edit your incrontab:

# incrontab -e

<Your Magento Root Directory>/media/catalog/product/cache/<Directory for your Catalog Image (SKU)> IN_CLOSE_WRITE /root/ $#

Example (for SKUs like KP00234, KP00235):

/var/www/html/media/catalog/product/cache/K/P IN_CLOSE_WRITE /root/ $#

Similarly, for multiple SKU ranges you need to add further incrontab rules.
You can check the logs using:

tail -f /var/log/messages (CentOS)

tail -f /var/log/syslog (Ubuntu)

Now you don't need to worry about cleaning your catalog image cache manually.


Magento Inventory Management – Desktop Tool

The Magento Desktop Tool allows you to easily manage inventory on your Magento store.

Magento Inventory Manager


This application allows store owners to update product stock on their website. It also lets users set up a Gmail (Google) SMTP account so that they receive a copy of the uploaded stock CSV file, allowing them to keep a record of stock updates on their store.

Download here.


Requirements:

– Magento API User.

– Gmail SMTP Settings (Optional).

– Currently it supports CSV files with a maximum of 400 SKUs at a time.


Features:

– Magento Product Inventory Update.

– File Upload Limit

– SMTP Settings for Email Alerts

– Easy-to-configure, secure backend.

– Stock file validation rule (first line of the product stock CSV): sku,qty,is_in_stock



Remove Double Extensions in Linux


The following script will help you remove double extensions, which can be added by some programs or by user mistake and need to be corrected in bulk.

Here is the solution. The script operates on the current working directory (non-recursively).


Content of the script:

#!/bin/bash
echo "Existing extension: $1"
echo "Converted extension: $2"
for file in *"$1"; do
    mv "$file" "${file%$2}"
    echo " -- Fixed ${file%$2}"
done

Make the script executable with chmod +x.

Now run this script as:

sh <double_wrong_extension> <correct_extension>

for example,

For double extension .jpg.jpg do this:

sh .jpg.jpg .jpg
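For illustration, here is the same loop run inline on hypothetical sample files in a temporary directory:

```shell
# Sample files with a doubled extension, plus one that should be left alone
tmp=$(mktemp -d)
cd "$tmp"
touch photo1.jpg.jpg photo2.jpg.jpg notes.txt

# $1 and $2 from the script, hard-coded for the demo
ext=".jpg.jpg"
fix=".jpg"
for file in *"$ext"; do
    mv "$file" "${file%$fix}"   # photo1.jpg.jpg -> photo1.jpg
done
ls
```

The glob only matches files ending in the doubled extension, so notes.txt is untouched while both photos lose one trailing .jpg.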



Check application memory usage using a simple command

To check the total amount of memory used by any program’s child processes:

Create a shell script
# vim


echo "-------------------------------------Memory Checkup-------------------------------------";

ps -ylC $1 --sort=rss | awk '!/RSS/ { s+=$8 } END { printf "%s\n", "Total memory used by child processes: "; printf "%dM\n", s/1024 }'

Run the script with the name of the program whose child processes' memory needs to be calculated as the argument:

sh nginx

————————————-Memory Checkup————————————
Total memory used by child processes:
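The awk part can be sanity-checked without ps by feeding it fabricated output (the process lines below are made up; RSS, in KB, is column 8 of ps -ly output):

```shell
# Two fake process lines with RSS values 2048 KB and 3072 KB
out=$(printf 'S  UID  PID PPID C PRI NI  RSS  SZ\nS    0    1    0 0  80  0 2048 100\nS    0    2    1 0  80  0 3072 100\n' \
  | awk '!/RSS/ { s+=$8 } END { printf "%dM", s/1024 }')
echo "$out"
```

The header line containing "RSS" is skipped, the two RSS values are summed (5120 KB), and the result is printed in megabytes: 5M.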


Redis: delete keys by checking/matching the remaining expiry time.


Sometimes we need to remove a specific group of keys which were created earlier or have a specific expiry time remaining. You can delete Redis key-value pairs by checking their remaining TTL and running this simple script in your console:

redis-cli -h <hostname> keys "*" | while read LINE ; do TTL=`redis-cli -h <hostname> ttl $LINE`; if [ $TTL -ge <expiry-time-need-to-check> ]; then echo "del $LINE"; RES=`redis-cli -h <hostname> del $LINE`; fi; done;


E.g. to delete all keys with at least 40000 seconds of TTL remaining:

# redis-cli -h <hostname> keys "*" | while read LINE ; do TTL=`redis-cli -h <hostname> ttl $LINE`; if [ $TTL -ge 40000 ]; then echo "del $LINE"; RES=`redis-cli -h <hostname> del $LINE`; fi; done;

You can also modify this script to delete persistent keys (non-expiring keys, for which TTL returns -1), which never get removed automatically:

# redis-cli -h <hostname> keys "*" | while read LINE ; do TTL=`redis-cli -h <hostname> ttl $LINE`; if [ $TTL -eq -1 ]; then echo "del $LINE"; RES=`redis-cli -h <hostname> del $LINE`; fi; done;


PhantomJS: convert a string to an image/PDF.

To convert a string or HTML to a PNG or PDF using PhantomJS:

Create a file, say texttoimage.js

Add this code:

var page = require('webpage').create();
page.viewportSize = { width: 200, height: 200 };
page.content = "This Text will be converted to Image";
window.setTimeout(function () {
    page.render('newimage.png');
    phantom.exit();
}, 1000);

Save the file and run the script from the console: phantomjs texttoimage.js

An image named newimage.png will be generated in the same directory where your script is saved.

Here is the output:

PhantomJS : Converted Text to Image
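To get a PDF instead, PhantomJS renders to PDF when the output file name ends in .pdf; set page.paperSize rather than viewportSize. A sketch (newfile.pdf is an assumed name):

```javascript
var page = require('webpage').create();
page.paperSize = { format: 'A4', orientation: 'portrait' };
page.content = "This Text will be converted to PDF";
window.setTimeout(function () {
    page.render('newfile.pdf');   // the .pdf extension selects PDF output
    phantom.exit();
}, 1000);
```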

Add an External JS in Magento

To add an external JS without any problem, use this in the XML layout:

[box type="info"]<reference name="head"><block type="core/text" name="google.cdn.jquery">
<action method="setText"><text><![CDATA[<script type="text/javascript" src=""></script><script type="text/javascript">jQuery.noConflict();</script>]]></text></action></block></reference>[/box]

The External JS will be added to the <head> tag.

Magento : Front controller reached 100 router match iterations

In Magento, basic URLs look like '{name-of-the-module}/{controller}/{action}'. A URL of the form '{name-of-the-module}/{controller}/' is equivalent to '{name-of-the-module}/{controller}/index', and '{name-of-the-module}/' is equivalent to '{name-of-the-module}/index/index'.

For categories and products there is a routing table (core_url_rewrite) in which different rewrite rules are saved, in order to have a URL like /product.html instead of catalog/product/view/id/7869.

This is what happens internally: you request a URL, say 'demo-url.html'. Magento looks in the rewrites table for a record with 'request_path' equal to 'demo-url.html' in the current store view. If none is found, it searches for a CMS page with this identifier that is active on the current store view.
If that is not found, Magento tries to match the URL to the form '{name-of-the-module}/{controller}/{action}';
if this doesn't work either, you get an error. If a rewrite record is found and its 'options' value is 'RP', it is a redirect and you are redirected to the URL in the 'target_path' field; otherwise the whole process starts over with the URL from 'target_path'.
There is a limit of 100 tries: if after 100 tries Magento still finds a record matching the 'request_path', you get the internal error 'Front controller reached 100 router match iterations'.

Now for the other part of the URL rewrites: how are they saved?

Each time you save a product/category (actually, when you save the URL key attribute of a product/category) a URL rewrite is created. For more details on how this works, see Mage_Catalog_Model_Product_Attribute_Backend_Urlkey::beforeSave(), where the URL key is prepared (stripped of illegal characters). The actual saving of the rewrite is done in Mage_Catalog_Model_Url::refreshProductRewrite() or refreshCategoryRewrite().
What these methods do is insert records into the core_url_rewrite table with the product/category url_key as 'request_path' and 'catalog/product/view/id/7869' or 'catalog/category/view/id/9' as 'target_path', in order to be matched later by the algorithm above.

Zend_Db_Statement_Exception with message 'SQLSTATE[23000]: Integrity constraint violation: 1062 Duplicate entry'

Exception 'Zend_Db_Statement_Exception' with message 'SQLSTATE[23000]: Integrity constraint violation: 1062 Duplicate entry' while Catalog URL indexing:


Next exception 'Zend_Db_Statement_Exception' with message 'SQLSTATE[23000]: Integrity constraint violation: 1062 Duplicate entry 'xxxx/xxx/xxx-' for key 'UNQ_CORE_URL_REWRITE_REQUEST_PATH_STORE_ID'' in /data/apache/beta/lib/Zend/Db/Statement/Pdo.php:234
Stack trace:


– Find the request path in the URL Rewrites data; there must be more than one entry for that request path.

– From that list, find the entry whose request_path matches the target_path of another entry in the list.

– Open it and disable the URL redirection (set it to No).

– Reindex the Catalog URL index, done. (php -f indexer.php -- --reindex catalog_url)

Magento Show Out of Stock Configurable Options – Magento 1.5+

To show Out of Stock options in configurable products, we need to make these changes:

Create a local module: Mindgeek_Outofstock

Rewrite /app/code/core/Mage/Catalog/Block/Product/View/Type/Configurable.php


In your module's rewrite of Mage/Catalog/Block/Product/View/Type/Configurable.php:

class Mindgeek_Outofstock_Block_Product_View_Type_Configurable extends Mage_Catalog_Block_Product_View_Type_Configurable
{
    // Rewrite the getAllowProducts() function so that out of stock
    // (non-saleable) children are included as well
    public function getAllowProducts()
    {
        if (!$this->hasAllowProducts()) {
            $products = array();
            $allProducts = $this->getProduct()->getTypeInstance(true)
                ->getUsedProducts(null, $this->getProduct());
            foreach ($allProducts as $product) {
                $products[] = $product;
            }
            $this->setAllowProducts($products);
        }
        return $this->getData('allow_products');
    }
}
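For this rewrite to take effect, the module's etc/config.xml also needs a block rewrite declaration. A sketch of the standard Magento 1 pattern:

```xml
<config>
    <global>
        <blocks>
            <catalog>
                <rewrite>
                    <product_view_type_configurable>Mindgeek_Outofstock_Block_Product_View_Type_Configurable</product_view_type_configurable>
                </rewrite>
            </catalog>
        </blocks>
    </global>
</config>
```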

Now a change is required in the design template, view.phtml.

Find:

<?php if ($_product->isSaleable() && $this->hasOptions()):?>
<?php echo $this->getChildChildHtml('container2', '', true, true) ?>
<?php endif;?>

and replace it with:

<?php if ($this->hasOptions()):?>
<?php echo $this->getChildChildHtml('container2', '', true, true) ?>
<?php endif;?>

Enable this module from /app/etc/modules/Mindgeek_Outofstock.xml:

<?xml version="1.0"?>
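The rest of that file is standard Magento 1 boilerplate; a minimal sketch (assuming the module lives in the local code pool):

```xml
<?xml version="1.0"?>
<config>
    <modules>
        <Mindgeek_Outofstock>
            <active>true</active>
            <codePool>local</codePool>
        </Mindgeek_Outofstock>
    </modules>
</config>
```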


Disable Magento Visitor Logs – For Better Performance

Disabling Magento visitor logs isn't possible from the Admin panel. Disabling visitor logs helps store performance and reduces the SQL queries issued while surfing the store, which decreases MySQL server load. The logging also sometimes causes an Integrity Constraint error.

So to disable visitor logs you need to rewrite a core model:

/app/code/core/Mage/Log/Model/Visitor.php – Mage_Log_Model_Visitor

Find the line below in Visitor.php:
protected $_skipRequestLogging = false;

Replace it as below and save:
protected $_skipRequestLogging = true;

Save the file and check your visitor log.

[box type="info"] If you have a high-traffic website, it is suggested to disable visitor logs to control the increasing database size.[/box]

Amazon SES: Request Expired, it must be within 300 secs of server time.

While using Amazon Simple Email Service, sometimes we encounter this error message:

"AWS Error Code: RequestExpired, AWS Error Message: Request timestamp: Sun, 13 Mar 2011 06:10:27 GMT expired. It must be within 300 secs/ of server time."

This error probably means that the Amazon SES time differs from your system clock time. If you're using the command-line tools, you may need to adjust your system clock to match Amazon SES's clock, which is synchronized with UTC.

If you're using a Linux system, try this:

[box type="shadow"] $ sudo /usr/sbin/ntpdate[/box]



Now you can easily send emails using SES.