How To Install Grub Customizer 4.0.6 On Debian 8.0 Jessie

Hello Linux Geeksters. As you may know, Grub Customizer is a nice application that allows the user to edit the menu entries of the GRUB boot screen, set the default operating system, and edit the GRUB configuration of an installed OS from a live CD.

The latest version available is Grub Customizer 4.0.6, which was released a while ago, bringing fixes.

In this article I will show you how to install Grub Customizer 4.0.6 on Debian 8.0 Jessie and derivative systems.

Because it is not available via a repository, we have to download the deb packages from Launchpad and install them via the command line. I prefer gdebi over dpkg because it also handles dependencies.

Follow the instructions for your system’s architecture exactly in order to get a successful installation.
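If you are not sure which architecture your system runs, here is a quick check before picking a package (a generic sketch; on Debian itself, `dpkg --print-architecture` gives the answer directly):

```shell
#!/bin/sh
# Map the kernel architecture reported by uname to the matching
# Debian package architecture, so you know which .deb to download.
case "$(uname -m)" in
    x86_64)         echo "amd64 - follow the 64 bit instructions" ;;
    i386|i586|i686) echo "i386 - follow the 32 bit instructions" ;;
    *)              echo "unsupported architecture: $(uname -m)" ;;
esac
```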

How to install Grub Customizer 4.0.6 on 32 bit Debian 8.0 systems:

$ sudo apt-get install gdebi
$ wget "https://launchpad.net/~danielrichter2007/+archive/ubuntu/grub-customizer/+files/grub-customizer_4.0.6-0ubuntu1%7Eppa1p_i386.deb" -O grub-customizer_4.0.6_i386.deb
$ sudo gdebi grub-customizer_4.0.6_i386.deb

How to install Grub Customizer 4.0.6 on 64 bit Debian 8.0 systems:

$ sudo apt-get install gdebi
$ wget "https://launchpad.net/~danielrichter2007/+archive/ubuntu/grub-customizer/+files/grub-customizer_4.0.6-0ubuntu1%7Eppa1p_amd64.deb" -O grub-customizer_4.0.6_amd64.deb
$ sudo gdebi grub-customizer_4.0.6_amd64.deb

Optionally, to remove grub-customizer, do:

$ sudo apt-get remove grub-customizer


Simplifying your Django Frontend Tasks with Grunt

Grunt is a powerful task runner with an amazing assortment of plugins. It’s not limited to the frontend, but there are many frontend-oriented plugins you can take advantage of to combine and minify your static media, compile Sass and Less files, watch for changes during development and reload your browser automatically, and much more.

In the last several years, the amount of tooling around frontend development has expanded dramatically. Frameworks, libraries, preprocessors and postprocessors, transpilers, template languages, module systems, and more! Wiring everything together has become a significant challenge, and a variety of build tools have emerged to help ease this burden. Grunt is the current leader because of its fantastic plugin community, and it contains a wide array of plugins that can be very valuable to a Django developer. Today I’m going to talk about an easy way to integrate Grunt with Django’s runserver, and highlight a few plugins to handle common frontend tasks that Django developers often deal with.

Installing Grunt

Grunt uses Node.js, so you’ll need to have that installed and configured on your system. This process will vary depending on your platform, but once it’s done you’ll need to install Grunt. From the documentation:

$ npm install -g grunt-cli


This will put the grunt command in your system path, allowing it to be run from any directory.

Note that installing grunt-cli does not install the Grunt task runner! The job of the Grunt CLI is simple: run the version of Grunt which has been installed next to a Gruntfile. This allows multiple versions of Grunt to be installed on the same machine simultaneously.

Next, you’ll want to install the Grunt task runner locally, along with a few plugins that I’ll demonstrate:

$ npm install --save-dev grunt grunt-contrib-concat grunt-contrib-uglify grunt-sass grunt-contrib-less grunt-contrib-watch


Managing Grunt with runserver

There are a few different ways to get Grunt running alongside Django in your local development environment. The method I’ll focus on here is extending the runserver command. To do this, create a gruntserver command inside one of your project’s apps. I commonly have a "core" app that I use for things like this. Create the "management/commands" folders in your "myproject/apps/core/" directory (adjusting that path to your own preferred structure), and make sure to drop an __init__.py in both of them. Then create a "gruntserver.py" inside "commands" to extend the built-in command.

In your new "gruntserver.py", extend the built-in command and override a few methods so that you can automatically manage the Grunt process:

import os
import subprocess
import atexit
import signal

from django.conf import settings
from django.contrib.staticfiles.management.commands.runserver import Command\
    as StaticfilesRunserverCommand


class Command(StaticfilesRunserverCommand):

    def inner_run(self, *args, **options):
        self.start_grunt()
        return super(Command, self).inner_run(*args, **options)

    def start_grunt(self):
        self.stdout.write('>>> Starting grunt')
        self.grunt_process = subprocess.Popen(
            ['grunt --gruntfile={0}/Gruntfile.js --base=.'.format(settings.PROJECT_PATH)],
            shell=True,
            stdin=subprocess.PIPE,
            stdout=self.stdout,
            stderr=self.stderr,
        )

        self.stdout.write('>>> Grunt process on pid {0}'.format(self.grunt_process.pid))

        def kill_grunt_process(pid):
            self.stdout.write('>>> Closing grunt process')
            os.kill(pid, signal.SIGTERM)

        atexit.register(kill_grunt_process, self.grunt_process.pid)


A barebones grunt config

To get started with Grunt, you’ll need a barebones "Gruntfile.js" at the root of your project to serve as your config.

module.exports = function(grunt) {

  // Project configuration.
  grunt.initConfig({
    pkg: grunt.file.readJSON('package.json'),

    // Task configuration goes here.

  });

  // Load plugins here.
  grunt.loadNpmTasks('grunt-contrib-concat');
  grunt.loadNpmTasks('grunt-contrib-uglify');
  grunt.loadNpmTasks('grunt-sass');
  grunt.loadNpmTasks('grunt-contrib-less');
  grunt.loadNpmTasks('grunt-contrib-watch');

  // Register tasks here.
  grunt.registerTask('default', []);

};


Combining static media

A common task for the frontend, and one that we often use complex apps for in Django, is combining and minifying static media. This can all be handled by Grunt if you like, avoiding difficulties sometimes encountered when using an integrated Django app.

To combine files, use the concat plugin. Add some configuration to the "grunt.initConfig" call, using the name of the task as the key for the configuration data:

grunt.initConfig({
    pkg: grunt.file.readJSON('package.json'),

    // Task configuration goes here.

    concat: {
      app: {
        src: ['myproject/static/js/app/**/*.js'],
        dest: 'build/static/js/app.js'
      },
      vendor: {
        src: ['myproject/static/js/vendor/**/*.js'],
        dest: 'build/static/js/lib.js'
      }
    }
  });


This will combine all JavaScript files under "myproject/static/js/app" into one file called "build/static/js/app.js". It will also combine all JavaScript files under "myproject/static/js/vendor" into one file called "build/static/js/lib.js". You’ll likely want to refine this quite a bit to pick up only the files you want, and possibly build different bundles for different sections of your site. This will also work for CSS or any other type of file, though you may be using a preprocessor to combine your CSS and won’t need this.

You’ll probably want to use this along with the "watch" plugin for local development, but you’ll use the "uglify" plugin for deployment.

Minifying static media

Once your app is ready for production, you can use Grunt to minify the JavaScript with the uglify plugin. As with concatenation, minification of your CSS will likely be handled by your preprocessor.

This task should be run as part of your deploy process, or part of a pre-deploy build process. The uglify config will probably be very similar to your concat config:

uglify: {
    app: {
      files: {'build/static/js/app.min.js': ['myproject/static/js/app/**/*.js']}
    },
    vendor: {
      files: {'build/static/js/lib.min.js': ['myproject/static/js/vendor/**/*.js']}
    }
  }


The main difference is that uglify takes the new-style "files" option instead of the classic "src" and "dest" options that concat uses.

Compiling Sass

You can compile Sass with Compass using the compass plugin, but I prefer to use the speedier sass plugin that uses libsass. Here’s an example that includes the Foundation library:

sass: {
      dev: {
        options: {
          includePaths: ['bower_components/foundation/scss']
        },
        files: {
          'build/static/css/screen.css': 'myproject/static/scss/screen.scss'
        }
      },
      deploy: {
        options: {
          includePaths: ['bower_components/foundation/scss'],
          outputStyle: 'compressed'
        },
        files: {
          'build/static/css/screen.min.css': 'myproject/static/scss/screen.scss'
        }
      }
    },


Compiling Less

Less is compiled using the less plugin.

less: {
      dev: {
        options: {
          paths: ['myproject/static/less']
        },
        files: {
          'build/static/css/screen.css': 'myproject/static/less/screen.less'
        }
      },
      deploy: {
        options: {
          paths: ['myproject/static/less'],
          compress: true
        },
        files: {
          'build/static/css/screen.min.css': 'myproject/static/less/screen.less'
        }
      }
    },


Watching for changes and live reloading

Now that you’ve got your initial operations configured, you can use the watch plugin to watch for changes and keep the files up to date. It will also send livereload signals, which you can use to automatically refresh your browser window.

watch: {
      options: {livereload: true},
      javascript: {
        files: ['myproject/static/js/app/**/*.js'],
        tasks: ['concat']
      },
      sass: {
        files: 'myproject/static/scss/**/*.scss',
        tasks: ['sass:dev']
      }
    }


Note the way the task is specified in the "sass" watch config. Calling "sass:dev" instructs it to use the "dev" config block from the "sass" task. Using "sass" by itself as the name of the task would have invoked both "sass:dev" and "sass:deploy" from our configuration above.

Also note how we’re using a top-level "options" definition here to make livereload the default. You can then override that for an individual watch definition if you don’t need livereload for that one.

In order for the browser to make use of the livereload signals, we’ll need to add a <script> tag that retrieves code from the livereload server that Grunt starts in the background. In Django, you’ll want to hide this tag behind a DEBUG check.

{% if debug %}
    <script src="//localhost:35729/livereload.js"></script>
{% endif %}


You can also use a LiveReload browser extension instead.

More to come

Grunt is a fantastic tool and one that makes it easier to work with the growing set of frontend tools that are emerging. There’s a vibrant plugin ecosystem, and its capabilities are growing all the time. I’ll be covering more of those tools in the future, and I’ll be sure to include Grunt configuration for each one. Enjoy!

Brandon Konkle

CGroups: A Limit for Sharing Resources, Not for Restricting Them


Unlike CentOS, Debian 7 Wheezy effectively has no proper maintainer for the cgroups subsystem, so you cannot get by with installing a couple of packages and editing one or two configs.

apt-get install -y cgroup-bin libcgroup1


Copy the example configs:

cp /usr/share/doc/cgroup-bin/examples/cgconfig.conf /etc/cgconfig.conf
cp /usr/share/doc/cgroup-bin/examples/cgconfig.sysconfig /etc/default/cgconfig
zcat /usr/share/doc/cgroup-bin/examples/cgconfig.gz > /etc/init.d/cgconfig
chmod +x /etc/init.d/cgconfig


Create a special directory (without it you will hit the error "touch: cannot touch /var/lock/subsys/cgconfig: No such file or directory [FAIL] Failed to touch /var/lock/subsys/cgconfig … failed!"):

mkdir -p /var/lock/subsys


Now let's add a cgroup for testing. Let it be cpuacct, since it causes no performance degradation and can be used harmlessly even on production software.

Add the following lines to the /etc/cgconfig.conf config:

mount {
    cpuacct = /mnt/cgroups/cpuacct;
}

group wwwdata {
    cpuacct {
    }
}


Now let's try to start it:

/etc/init.d/cgconfig restart


Make sure the cgroup was mounted correctly:

cat /proc/mounts |grep cgroup
cgroup /mnt/cgroups/cpuacct cgroup rw,relatime,cpuacct 0 0


The directory itself looks roughly like this:

ls -al /mnt/cgroups/cpuacct/
total 4.0K
drwxr-xr-x. 3 root root    0 Feb 5 14:04 .
drwxr-xr-x. 3 root root 4.0K Feb 5 14:04 ..
-rw-r--r--. 1 root root    0 Feb 5 14:04 cgroup.clone_children
--w--w--w-. 1 root root 0 Feb 5 14:04 cgroup.event_control
-rw-r--r--. 1 root root 0 Feb 5 14:04 cgroup.procs
-r--r--r--. 1 root root 0 Feb 5 14:04 cpuacct.stat
-rw-r--r--. 1 root root 0 Feb 5 14:04 cpuacct.usage
-r--r--r--. 1 root root 0 Feb 5 14:04 cpuacct.usage_percpu
-rw-r--r--. 1 root root 0 Feb 5 14:04 notify_on_release
-rw-r--r--. 1 root root 0 Feb 5 14:04 release_agent
-rw-r--r--. 1 root root 0 Feb 5 14:04 tasks
drwxr-xr-x. 2 root root 0 Feb 5 14:04 wwwdata


Now we need to make the processes of a specific user land in a specific cgroup.

Copy the configs again:

cp /usr/share/doc/cgroup-bin/examples/cgrules.conf /etc/cgrules.conf
cp /usr/share/doc/cgroup-bin/examples/cgred /etc/init.d/cgred
cp /usr/share/doc/cgroup-bin/examples/cgred.conf /etc/default/cgred.conf
chmod +x /etc/init.d/cgred


Next, fix the Debian maintainers' bugs:

sed -i 's/sysconfig/default/' /etc/init.d/cgconfig


Then add one line at the very bottom of /etc/cgrules.conf:

@www-data cpuacct wwwdata/


Next, disable creation of the default group (into which the system places all processes except those assigned to other groups):

vim /etc/default/cgconfig
CREATE_DEFAULT=no


This configures all processes of the www-data user to be placed into the group named wwwdata.

After that, run the following command several times (something in the scripts is off, and it fails to unmount the cgroup on the first try):

/etc/init.d/cgconfig stop


And then start it:

/etc/init.d/cgconfig start


After that, set up the cgred daemon, which is what actually sorts processes into their cgroups:

/etc/init.d/cgred start


But a nasty disappointment awaits us: the init script was taken straight from RedHat without proper adaptation to Debian (incidentally, /etc/init.d/cgconfig is much the same: it uses the /etc/sysconfig path instead of /etc/default):

/etc/init.d/cgred: line 43: /etc/rc.d/init.d/functions: No such file or directory
Starting CGroup Rules Engine Daemon: /etc/init.d/cgred: line 85: daemon: command not found

To fix this, open /etc/init.d/cgred and comment out line 43, the one that sources "/etc/rc.d/init.d/functions".

Then find the line

daemon --check $servicename --pidfile $pidfile $CGRED_BIN $OPTIONS

and replace it with:

start-stop-daemon --start --quiet --pidfile $pidfile --exec $CGRED_BIN -- $OPTIONS


Then fix the paths:

sed -i 's/sysconfig/default/' /etc/init.d/cgred


We also need to fix the group the daemon will run as:

vim /etc/default/cgred.conf


and replace SOCKET_GROUP="cgred" with SOCKET_GROUP=""

Next we need to create a directory for our cgroup:

mkdir /mnt/cgroups/cpuacct/wwwdata


Now, start the daemon:

/etc/init.d/cgred start


After that, restart the processes you want placed into the cgroup; in my case that is nginx, running as the www-data user:

/etc/init.d/nginx reload


That's it. Now confirm that the processes ended up in the right cgroup:

cat /mnt/cgroups/cpuacct/wwwdata/tasks
28623
28624
28625
28626
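The tasks file is simply a list of PIDs. To see which commands those PIDs belong to, you can loop over /proc. The sketch below reads a temporary file containing the current shell's own PID so that it runs anywhere; on a configured system, point tasks_file at /mnt/cgroups/cpuacct/wwwdata/tasks instead:

```shell
#!/bin/sh
# Print "PID command" for every PID listed in a cgroup tasks file.
tasks_file=$(mktemp)
echo $$ > "$tasks_file"   # stand-in for /mnt/cgroups/cpuacct/wwwdata/tasks

while read -r pid; do
    printf '%s %s\n' "$pid" "$(cat /proc/"$pid"/comm 2>/dev/null)"
done < "$tasks_file"

rm -f "$tasks_file"
```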


Running sites as different users with nginx + php-fpm

By default, all sites run as the user specified in the php-fpm settings. To run sites as different users, you need to create separate configuration files in the /etc/php7/fpm/pool.d directory, making sure that /etc/php7/fpm/php-fpm.conf contains the line:

include=/etc/php7/fpm/pool.d/*.conf

Now create a configuration file for our site (I'll use 891rpm.arthead.ru as the example): /etc/php7/fpm/pool.d/891rpm.arthead.ru.conf:

[891rpm.arthead.ru]
listen = /run/php7-891rpm.sock
listen.mode = 0660
user = 891rpm_com
group = 891rpm_com
chdir = /var/www/891rpm.arthead.ru

php_admin_value[upload_tmp_dir] = /var/www/891rpm.arthead.ru/tmp
php_admin_value[soap.wsdl_cache_dir] = /var/www/891rpm.arthead.ru/tmp
php_admin_value[date.timezone] = Europe/Moscow
php_admin_value[upload_max_filesize] = 100M
php_admin_value[post_max_size] = 100M
php_admin_value[open_basedir] = "/var/www/891rpm.arthead.ru/"
php_admin_value[session.save_path] = /var/www/891rpm.arthead.ru/tmp
php_admin_value[disable_functions] = dl,exec,passthru,shell_exec,system,proc_open,popen,curl_exec,parse_ini_file,show_source
php_admin_value[cgi.fix_pathinfo] = 0
php_admin_value[apc.cache_by_default] = 0

; Adjust these parameters according to the load
pm = dynamic
pm.max_children = 10
pm.start_servers = 2
pm.min_spare_servers = 2
pm.max_spare_servers = 4

In the /etc/nginx/sites-available/891rpm.arthead.ru.conf settings file, point nginx at the socket:

upstream 891rpm-sock {
    server unix:/var/run/php7-891rpm.sock;
}

server {
        listen 80;
        server_name 891rpm.arthead.ru www.891rpm.arthead.ru;
...
location ~ \.php$
        {
                include fastcgi.conf;
                fastcgi_intercept_errors on;
                fastcgi_pass 891rpm-sock;
        }
...
}

In /etc/group, note that www-data is a member of the site user's group:

www-data:x:33:
891rpm_com:x:1001:www-data


Restart php-fpm and nginx:

sudo /etc/init.d/php7-fpm restart
sudo /etc/init.d/nginx restart

Now the site will run as the specified user.

How to split a tar.gz archive into volumes and join them back together

To create an archive split into volumes, run the usual archiving command in the console, sending the result to stdout and thus to the second command of the pipeline, which splits the received stream into volumes of the required size.

tar czf - ./backup | split -d -b 10m - backup.tar.gz.

(Don't forget the trailing dot: it is part of the filename prefix passed to split.)

The result is several 10 MB files ending in .00, .01, .02, and so on.

To join the resulting volumes back together, run cat and pipe the data to the archiver via stdout:

cat backup.tar* | tar xzf -
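The whole round trip can be checked end to end on throwaway data (the paths and the 1 MB volume size below are purely illustrative):

```shell
#!/bin/sh
set -e
# Create some sample data to archive.
mkdir -p demo/backup
printf 'hello\n' > demo/backup/a.txt
printf 'world\n' > demo/backup/b.txt
cd demo

# Archive to stdout and split the stream into 1 MB volumes
# with numeric suffixes (.00, .01, ...).
tar czf - ./backup | split -d -b 1M - backup.tar.gz.

# Join the volumes and extract into a separate directory.
mkdir restore
cat backup.tar.gz.* | tar xzf - -C restore

# The restored tree matches the original.
diff -r backup restore/backup && echo OK
```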


Accessing and debugging the Django development server from another machine

In continuing my series about setting up a Linux virtual machine for Django development (see parts one and two), I wanted to share another tip for accessing the VM from the host machine.

Set up development server to listen to external requests

By default, when using the Django management runserver command, the development server will only listen to requests originating from the local machine (using the loopback address 127.0.0.1). Luckily, the runserver command accepts an IP address and port. Specifying 0.0.0.0 will allow the server to accept requests from any machine:

python manage.py runserver 0.0.0.0:8000

I do not have to worry about security issues with the development server listening to all requests, since the VM is protected from external access by a firewall.

Set up server to send debug information to local network

While the first step will allow us to access the development server from the host machine, we will not be able to see debugging information (for example, the django-debug-toolbar will not be displayed on the host machine, even if DEBUG is set to True). Django uses another setting, INTERNAL_IPS, to determine which machines are allowed to view debugging information.

For a typical installation, I set INTERNAL_IPS to only specify 127.0.0.1, allowing me to easily debug Django apps running on the local machine.

INTERNAL_IPS = ('127.0.0.1',)

Now, since the setting is a tuple, we could easily add the IP address of the host machine and call it a day. But we would run into the same problem as in the last post, which is the reason I installed Samba: to access the VM by its host name instead of its IP address. Whenever the machine got a new IP address from the DHCP server, I would have to update the setting. I found a great solution to this problem in this snippet, which adds wildcard support to INTERNAL_IPS. I personally put this code inside an if block so that it only executes in a development setting. Here is the code from the end of my settings.py file:

if DEBUG:
    from fnmatch import fnmatch

    class glob_list(list):
        # A list whose membership test matches glob patterns.
        def __contains__(self, key):
            for elt in self:
                if fnmatch(key, elt):
                    return True
            return False

    INTERNAL_IPS = glob_list(['127.0.0.1', '192.168.*.*'])

Now, I can access and debug Django projects using the development server from any machine on my local network.

How to Install and Connect to PostgreSQL on CentOS 7

PostgreSQL (pronounced ‘post-gres-Q-L’) is a free, open-source object-relational database management system (object-RDBMS), similar to MySQL, and is standards-compliant and extensible. It is commonly used as a back-end for web and mobile applications. PostgreSQL, or ‘Postgres’ as it is nicknamed, adopts the ANSI/ISO SQL standards together with their revisions.

Pre-Flight Check
  • These instructions are intended specifically for installing PostgreSQL on CentOS 7.
  • I’ll be working from a Liquid Web Self Managed CentOS 7 server, and I’ll be logged in as root.
Step 1: Add the PostgreSQL 9.4 Repository

In this case we want to install PostgreSQL 9.4 directly from the Postgres repository. Let’s add that repo:

wget http://yum.postgresql.org/9.4/redhat/rhel-7-x86_64/pgdg-centos94-9.4-1.noarch.rpm
rpm -ihvU pgdg-centos94-9.4-1.noarch.rpm

Step 2: Install PostgreSQL

First, you’ll follow a simple best practice: ensuring the list of available packages is up to date before installing anything new.

yum -y update

Then it’s a matter of running just one command for installation via yum:

yum -y install postgresql94 postgresql94-server postgresql94-contrib postgresql94-libs --disablerepo=* --enablerepo=pgdg94

PostgreSQL should now be installed.

Step 3: Start PostgreSQL

Configure Postgres to start when the server boots:

systemctl enable postgresql-9.4

Initialize the database, then start Postgres:

/usr/pgsql-9.4/bin/postgresql94-setup initdb

systemctl start postgresql-9.4

Step 4: Switch to the Default PostgreSQL User

As part of the installation, Postgres adds the system user postgres and is set up to use “ident” authentication. Roles internal to Postgres (which are similar to users) are matched with a system user account.

Let’s switch into that system user:

su - postgres

And then connect to the PostgreSQL terminal (in the postgres role):

$ psql

That’s it! You’re connected and ready to run commands in PostgreSQL as the postgres role.

Colored Bash man pages and Log files


Below are the Bash color codes you can use with man pages or while reading log files:

'\e[0;30m' # Black - Regular
'\e[0;31m' # Red
'\e[0;32m' # Green
'\e[0;33m' # Yellow
'\e[0;34m' # Blue
'\e[0;35m' # Purple
'\e[0;36m' # Cyan
'\e[0;37m' # White

'\e[1;30m' # Black - Bold
'\e[1;31m' # Red
'\e[1;32m' # Green
'\e[1;33m' # Yellow
'\e[1;34m' # Blue
'\e[1;35m' # Purple
'\e[1;36m' # Cyan
'\e[1;37m' # White

'\e[4;30m' # Black - Underline
'\e[4;31m' # Red
'\e[4;32m' # Green
'\e[4;33m' # Yellow
'\e[4;34m' # Blue
'\e[4;35m' # Purple
'\e[4;36m' # Cyan
'\e[4;37m' # White

'\e[40m'   # Black - Background
'\e[41m'   # Red
'\e[42m'   # Green
'\e[43m'   # Yellow
'\e[44m'   # Blue
'\e[45m'   # Purple
'\e[46m'   # Cyan
'\e[47m'   # White
'\e[0m'    # Text Reset
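To preview these codes in a terminal, you can loop over them with printf, using the octal form \033 of the escape character (which plain sh printf understands, unlike \e):

```shell
#!/bin/sh
# Print a labelled sample of each regular foreground color (30-37).
for code in 30 31 32 33 34 35 36 37; do
    printf '\033[0;%sm color %s \033[0m\n' "$code" "$code"
done
```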

If you use the less pager to view man pages, you can add the variables below to your .bashrc file. After that, reload it with source /home/user/.bashrc and you will see colored man pages.

export LESS_TERMCAP_mb=$'\E[01;31m'
export LESS_TERMCAP_md=$'\E[01;31m'
export LESS_TERMCAP_me=$'\E[0m'
export LESS_TERMCAP_se=$'\E[0m'
export LESS_TERMCAP_so=$'\E[01;44;33m'
export LESS_TERMCAP_ue=$'\E[0m'
export LESS_TERMCAP_us=$'\E[01;32m'

For log files, use perl with your combined regex:

 tail -f  /var/log/mail.log | perl -pe 's/(fatal|error|panic|success)/\e[1;31m$&\e[0m/g'

One last color trick: use the grep command with the --color switch.
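For example, --color=always keeps the highlighting even when the output is piped onward (plain --color is equivalent to --color=auto, which colors only when writing to a terminal):

```shell
# Highlight every match of "error" in the line, even when piped.
echo "2015-02-05 12:00:01 error: disk full" | grep --color=always 'error'
```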

Dig

Dig is a powerful Linux tool, and today I’ll demonstrate some useful everyday examples, including a reverse lookup, a zone transfer, and how to find the SOA (start of authority) in a zone file.

So what is dig?

man dig

"dig (domain information groper) is a flexible tool for interrogating DNS name servers."

A simple example

How to find the IP address (A record) associated with a domain:

dig tomhayman.co.uk +short

Which outputs:

75.127.99.28

Reverse lookup example

How to find the domain name associated with an IP address:

dig -x 75.127.99.28 +short

Which outputs:

zoe.asmallorange.com.

(For more information remove +short)

Zone transfer example

First, find the name server to query:

dig ns tomhayman.co.uk +short

Which outputs:

ns1.asmallorange.com.
ns2.asmallorange.com.

Then:

dig -t axfr @ns1.asmallorange.com tomhayman.co.uk

Which outputs:

; <<>> DiG 9.3.4-P1 <<>> -t axfr @ns1.asmallorange.com tomhayman.co.uk
; (1 server found)
;; global options:  printcmd
; Transfer failed.

But the transfer failed!  This is normally due to security settings on the name server.  Sometimes you can request this to be removed, although most providers prevent it.

However, some organisations allow this behaviour. One of them is Wikipedia.

So if we try the process again:

dig ns wikipedia.org +short

Which outputs:

ns0.wikimedia.org.

Then:

dig -t axfr @ns0.wikimedia.org wikipedia.org | head -n 10

Which outputs:

; <<>> DiG 9.3.4-P1 <<>> -t axfr @ns0.wikimedia.org wikipedia.org
; (1 server found)
;; global options:  printcmd
wikipedia.org.          86400   IN      SOA     ns0.wikimedia.org. hostmaster.wikimedia.org. 2010082803 43200 7200 1209600 3600
wikipedia.org.          3600    IN      A       208.80.152.2
wikipedia.org.          86400   IN      NS      ns0.wikimedia.org.
wikipedia.org.          86400   IN      NS      ns1.wikimedia.org.
wikipedia.org.          86400   IN      NS      ns2.wikimedia.org.
wikipedia.org.          3600    IN      MX      50 lists.wikimedia.org.

(N.B. I used head to output the first 10 lines only, as wikipedia.org has thousands of CNAMEs)

Start of authority (SOA) example

Find the SOA record in a zone file:

dig +nocmd wikipedia.org any +multiline +noall +answer

Which outputs:

wikipedia.org.          1589 IN A 208.80.152.2
wikipedia.org.          84389 IN NS ns0.wikimedia.org.
wikipedia.org.          84389 IN SOA ns0.wikimedia.org. hostmaster.wikimedia.org. (
2010082803 ; serial
43200      ; refresh (12 hours)
7200       ; retry (2 hours)
1209600    ; expire (2 weeks)
3600       ; minimum (1 hour)
)
wikipedia.org.          1589 IN MX 50 lists.wikimedia.org.

Dig can do a lot more than the examples I’ve illustrated today.  You can build some useful scripts with it too, which I’ll demonstrate at another time.

Converting a Database from CP1251 to UTF8

Converting a database from Windows-1251 to UTF8 can be done in different ways, but the fastest and simplest is the SQL query shown below.

ALTER TABLE `db_name`.`table_name` CONVERT TO CHARACTER SET utf8 COLLATE utf8_general_ci;

With this query you can convert a database table to any encoding available in MySQL. But what if there are 100, 200 or more tables, and they all need to be converted from Windows-1251 to UTF8? To solve this, you can send MySQL a query that generates the necessary SQL statements for every table in the database. When using phpMyAdmin, all that remains is to copy the results and run them as an SQL query:

SELECT CONCAT( 'ALTER TABLE `', t.`TABLE_SCHEMA` , '`.`', t.`TABLE_NAME` , '` CONVERT TO CHARACTER SET utf8 COLLATE utf8_general_ci;' ) AS sqlcode
FROM `information_schema`.`TABLES` t
WHERE 1
AND t.`TABLE_SCHEMA` = 'My_DB_for_convert'
ORDER BY 1
LIMIT 0 , 90

This query works in MySQL version 5 and above. Replace My_DB_for_convert with the name of the database whose tables need converting to UTF-8.
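The same statement list can also be generated outside MySQL. Here is a hedged shell sketch, assuming the table names sit one per line in a file called tables.txt (for example, exported with `mysql -N -e 'SHOW TABLES FROM mydb'`; mydb and tables.txt are placeholder names):

```shell
#!/bin/sh
# Emit one ALTER TABLE ... CONVERT TO statement per table listed
# in tables.txt; redirect the output into a .sql file to run it.
db="mydb"
while IFS= read -r table; do
    printf 'ALTER TABLE `%s`.`%s` CONVERT TO CHARACTER SET utf8 COLLATE utf8_general_ci;\n' \
        "$db" "$table"
done < tables.txt
```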

utf8_general_ci or utf8_unicode_ci

utf8_general_ci and utf8_unicode_ci differ only in speed and in sort order. Since utf8_general_ci is faster, it is the preferred choice. You can read more about the differences between utf8_general_ci and utf8_unicode_ci in the official MySQL documentation. If needed, you can later switch the UTF8 collation from utf8_general_ci to utf8_unicode_ci using the method described above.