A simple filtering syntax tree in Python

Working on various pieces of software these last few years, I noticed that there's always a feature that requires implementing some DSL.

The problem with a DSL is that it is never the road you want to take. I remember how fascinating creating my first DSL was: after using programming languages for years, I was finally designing my own tiny language!

A new language that my users would have to learn and master. Oh, it had nothing new, it was a subset of something, inspired by my years of C, Perl or Python, who knows. And that's the terrible part about DSLs: they are a marvelous tradeoff between the power they give users, allowing them to define their needs precisely, and the cumbersomeness of learning a language that is useful in only one specific situation.

In this blog post, I would like to introduce a very unsophisticated way of implementing a syntax tree that could be used as the basis for a DSL. The goal of that syntax tree will be filtering. The problem it solves is the following: given a piece of data, we want the user to tell us whether the data matches their conditions or not.

To give a concrete example: a machine wants to grant the user the ability to filter the beans that it should keep. What the machine passes to the filter is the size of the current bean, and the filter should return either true or false, based on the condition defined by the user: for example, only keep beans that are between 1 and 2 centimeters or between 4 and 6 centimeters.

The number of conditions that users can define could be quite considerable, and we want to provide at least a basic set of predicate operators: equal, greater than and less than. We also want the user to be able to combine those, so we'll add the logical operators or and and.

A set of conditions can be seen as a tree, where nodes are either predicates, and in that case do not have children, or logical operators, and have children. For example, the propositional logic formula φ1 ∨ (φ2 ∨ φ3) can be represented as a tree like this:

Starting with this in mind, it appears that the natural solution is going to be recursive: handle predicates as terminals, and if the node is a logical operator, recurse over its children.
Since we're working in Python, we're going to use Python to evaluate our syntax tree.

The simplest way to write a tree in Python is going to be using dictionaries. A dictionary will represent one node and will have only one key and one value: the key will be the name of the operator (equal, greater than, or, and…) and the value will be the argument of this operator if it is a predicate, or a list of children (as dictionaries) if it is a logical operator.
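To make that convention concrete, here is a tiny sketch (the variable names are mine) showing how a single predicate node decomposes into an operator name and its argument:

```python
# One node = one dictionary with a single key/value pair:
# the operator name and its argument.
node = {"ge": 4}
op_name, argument = list(node.items())[0]
print(op_name, argument)  # ge 4
```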

For example, to filter our bean, we would create a tree such as:

{"or": [
  {"and": [
    {"ge": 1},
    {"le": 2},
  ]},
  {"and": [
    {"ge": 4},
    {"le": 6},
  ]},
]}

The goal here is to walk through the tree, evaluate each of its leaves, and return the final result: if we passed 5 to this filter, it would return True, and if we passed 10 to this filter, it would return False.

Here's how we could implement a very shallow filter that only handles predicates (for now):

import operator

class InvalidQuery(Exception):
    pass

class Filter(object):
    binary_operators = {
        "eq": operator.eq,
        "gt": operator.gt,
        "ge": operator.ge,
        "lt": operator.lt,
        "le": operator.le,
    }

    def __init__(self, tree):
        # Parse the tree and store the evaluator
        self._eval = self.build_evaluator(tree)

    def __call__(self, value):
        # Call the evaluator with the value
        return self._eval(value)

    def build_evaluator(self, tree):
        try:
            # Pick the first item of the dictionary.
            # If the dictionary has multiple keys/values
            # the first one (= random) will be picked.
            # The key is the operator name (e.g. "eq")
            # and the value is the argument for it
            operator, nodes = list(tree.items())[0]
        except Exception:
            raise InvalidQuery("Unable to parse tree %s" % tree)
        try:
            # Lookup the operator name
            op = self.binary_operators[operator]
        except KeyError:
            raise InvalidQuery("Unknown operator %s" % operator)
        # Return a function (lambda) that takes
        # the filtered value as argument and returns
        # the result of the predicate evaluation
        return lambda value: op(value, nodes)

You can use this Filter class by passing a predicate such as {"eq": 4}:

>>> f = Filter({"eq": 4})
>>> f(2)
False
>>> f(4)
True

This Filter class works but is quite limited, as we did not provide logical operators. Here's a complete implementation that supports the binary operators as well as and and or:

import operator


class InvalidQuery(Exception):
    pass


class Filter(object):
    binary_operators = {
        u"=": operator.eq,
        u"==": operator.eq,
        u"eq": operator.eq,

        u"<": operator.lt,
        u"lt": operator.lt,

        u">": operator.gt,
        u"gt": operator.gt,

        u"<=": operator.le,
        u"≤": operator.le,
        u"le": operator.le,

        u">=": operator.ge,
        u"≥": operator.ge,
        u"ge": operator.ge,

        u"!=": operator.ne,
        u"≠": operator.ne,
        u"ne": operator.ne,
    }

    multiple_operators = {
        u"or": any,
        u"∨": any,
        u"and": all,
        u"∧": all,
    }

    def __init__(self, tree):
        self._eval = self.build_evaluator(tree)

    def __call__(self, value):
        return self._eval(value)

    def build_evaluator(self, tree):
        try:
            operator, nodes = list(tree.items())[0]
        except Exception:
            raise InvalidQuery("Unable to parse tree %s" % tree)
        try:
            op = self.multiple_operators[operator]
        except KeyError:
            try:
                op = self.binary_operators[operator]
            except KeyError:
                raise InvalidQuery("Unknown operator %s" % operator)
            return lambda value: op(value, nodes)
        # Iterate over every item in the list of the value linked
        # to the logical operator, and compile it down to its own
        # evaluator.
        elements = [self.build_evaluator(node) for node in nodes]
        return lambda value: op((e(value) for e in elements))

To support the and and or operators, we leverage the built-in Python functions all and any. They are called with a generator that evaluates each of the sub-evaluators, which does the trick.
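As a standalone illustration (the sub-evaluators below are stand-ins of mine, not part of the post's code), all behaves like and and any like or, both consuming the generator lazily:

```python
# Two sub-evaluators standing in for compiled predicate nodes.
evaluators = [lambda v: v >= 1, lambda v: v <= 2]

# all() short-circuits like `and`, any() like `or`.
print(all(e(5) for e in evaluators))  # False: 5 <= 2 fails
print(any(e(5) for e in evaluators))  # True: 5 >= 1 succeeds
print(all(e(2) for e in evaluators))  # True: 2 is within [1, 2]
```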

Unicode is the new sexy, so I've also added support for Unicode symbols.

And it is now possible to implement our full example:

>>> f = Filter(
...     {"∨": [
...         {"∧": [
...             {"≥": 1},
...             {"≤": 2},
...         ]},
...         {"∧": [
...             {"≥": 4},
...             {"≤": 6},
...         ]},
...     ]})
>>> f(5)
True
>>> f(8)
False
>>> f(1)
True

As an exercise, you could try to add the not operator, which deserves its own category as it is a unary operator!
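One possible way to tackle that exercise (a sketch of mine, not the post's official answer): treat not as an operator whose value is a single sub-tree rather than a list, and negate the compiled sub-evaluator. A self-contained minimal version:

```python
import operator


class InvalidQuery(Exception):
    pass


class NotFilter(object):
    """Minimal filter supporting "not" on top of a few predicates."""
    binary_operators = {
        "eq": operator.eq,
        "ge": operator.ge,
        "le": operator.le,
    }
    multiple_operators = {"or": any, "and": all}

    def __init__(self, tree):
        self._eval = self.build_evaluator(tree)

    def __call__(self, value):
        return self._eval(value)

    def build_evaluator(self, tree):
        try:
            op_name, nodes = list(tree.items())[0]
        except Exception:
            raise InvalidQuery("Unable to parse tree %s" % tree)
        if op_name == "not":
            # Unary operator: its value is a single sub-tree, not a list.
            sub = self.build_evaluator(nodes)
            return lambda value: not sub(value)
        if op_name in self.multiple_operators:
            op = self.multiple_operators[op_name]
            elements = [self.build_evaluator(node) for node in nodes]
            return lambda value: op(e(value) for e in elements)
        try:
            op = self.binary_operators[op_name]
        except KeyError:
            raise InvalidQuery("Unknown operator %s" % op_name)
        return lambda value: op(value, nodes)


f = NotFilter({"not": {"and": [{"ge": 1}, {"le": 2}]}})
print(f(5))  # True: 5 is outside [1, 2]
print(f(1))  # False
```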

In the next blog post, we will see how to improve that filter with more features, and how to implement a domain-specific language on top of it, to make humans happy when writing the filter!

Hole and Henni – François Charlier, 2018

In this drawing, the artist represents the depth of functional programming and how its horsepower can help you escape many dark situations.


Julien Danjou

https://julien.danjou.info/simple-filtering-syntax-tree-in-python/

Installing NPM packages globally without sudo

Node.js is gaining enormous popularity. One of its most remarkable features is NPM packages, or modules. By default they are installed locally, into the directory from which you ran the command. However, there is a way to install NPM packages globally. The problem is that to do so, you have to run the package installation command as the root user.

Fortunately, this problem can be fixed in a few simple steps.

1. Create a directory for the global packages

$ mkdir ~/.npm-packages

2. Set the packages location via .bashrc

$ NPM_PACKAGES="${HOME}/.npm-packages"

3. Tell npm where you are going to keep the global packages

To do this, open the file ~/.npmrc in a text editor and add the following line:

prefix=${HOME}/.npm-packages

4. Make sure Node.js knows where the packages are

Open ~/.bashrc again in a text editor and add the following lines:

NODE_PATH="$NPM_PACKAGES/lib/node_modules:$NODE_PATH"
PATH="$NPM_PACKAGES/bin:$PATH"
unset MANPATH
MANPATH="$NPM_PACKAGES/share/man:$(manpath)"

If all the previous steps seem too complicated, you can use the npm-g_nosudo script, which performs all of them automatically.

How to monitor traffic at Cisco router using Netflow

By default, Cisco IOS doesn't provide any traffic monitoring tools like iftop or iptraf available in Linux. While there are lots of proprietary solutions for this purpose, including Cisco Netflow Collection, you are free to choose the open source nfdump and nfsen to monitor traffic of one or many Cisco routers and get detailed monitoring data through your Linux command line or as graphs, at absolutely no cost.

Below is a beginner's guide that helps you quickly deploy a netflow collector and visualizer under Linux and impress everybody with cute and descriptive graphs like these:

It is highly recommended to look through the Netflow basics to get a brief understanding of how it works before configuring anything; for example, here is a Cisco document that gives complete information about Netflow. In a few words: to get started, you should enable netflow exporting on the Cisco router and point it at the netflow collector running under Linux. The exported data will contain complete information about all packets the router has received/sent, so nfdump and nfsen running under Linux can collect it and visualize it to produce graphs like the example above.

Cisco Router Setup

1. Enable flow export on ALL of the Cisco router's interfaces that send and receive traffic; here is an example:

Router1# configure terminal
Router1(config)#interface FastEthernet 0/0
Router1(config-if)#ip route-cache flow input
Router1(config-if)#interface FastEthernet 0/1
Router1(config-if)#ip route-cache flow input
...

2. Set up netflow export:

Router1# configure terminal
Router1(config)#ip flow-export source FastEthernet0/0
Router1(config)#ip flow-export source FastEthernet0/1
Router1(config)#ip flow-export version 5
Router1(config)#ip flow-export destination 1.1.1.1 23456

Here 1.1.1.1 is the IP address of the Linux host where you plan to collect and analyze netflow data, and 23456 is the port number of the netflow collector running on Linux.

Linux Setup

1. Download and install nfdump.

cd /usr/src/
wget http://sourceforge.net/projects/nfdump/files/stable/nfdump-1.6.2/nfdump-1.6.2.tar.gz/download
tar -xvzf nfdump-1.6.2.tar.gz
cd nfdump-1.6.2
./configure --prefix=/ --enable-nfprofile
make
make install

2. Download and install nfsen.

It requires a web server with a PHP module and RRD, so make sure you have the corresponding packages installed. I assume you're already running httpd with PHP, so below are installation hints for the rrd/perl-related packages only.

Fedora/Centos/Redhat users should type this:

yum install rrdtool rrdtool-devel rrdutils perl-rrdtool

Ubuntu/Debian:

aptitude install rrdtool librrd2-dev librrd-dev librrd4 librrds-perl librrdp-perl

If you run some exotic Linux distribution just install everything that is related to rrd + perl.

At last, nfsen installation:

cd /usr/src/
wget http://sourceforge.net/projects/nfsen/files/stable/nfsen-1.3.5/nfsen-1.3.5.tar.gz/download
tar -xvzf nfsen-1.3.5.tar.gz
cd nfsen-1.3.5
cp etc/nfsen-dist.conf etc/nfsen.conf

In order to continue, you should edit the file etc/nfsen.conf to specify where to install nfsen, the web server's username, its document root directory, etc. The file is commented, so there shouldn't be serious problems with it.

One of the major sections of nfsen.conf is 'Netflow sources'; it should contain exactly the same port number(s) you've configured Cisco with (recall the 'ip flow-export …' line where we specified port 23456), e.g.

%sources = (
    'Router1'    => { 'port' => '23456', 'col' => '#0000ff', 'type' => 'netflow' },
);

Now it’s time to finish the installation:

./install.pl etc/nfsen.conf

In case of success you'll see a corresponding notification, after which you will have to start the nfsen daemon to get the ball rolling:

/path/to/nfsen/bin/nfsen start

From this point on, nfdump is collecting the netflow data exported by the Cisco router and nfsen is hard at work visualizing it; just open a web browser and go to http://linux_web_server/nfsen/nfsen.php to make sure. If you see empty graphs, just wait for a while to let nfsen collect enough data to visualize.

That’s it!

An RTMP server on Debian Linux

Installation

Then install the additional packages:

apt-get update
apt-get install java-package
apt-get install sun-java6-jdk
apt-get install sun-java6-jre
apt-get install ant

Installing Red5

wget https://github.com/Red5/red5-server/releases/download/v1.0.8-M13/red5-server-1.0.8-M13.tar.gz
tar xvfz red5-server-1.0.8-M13.tar.gz
mv red5-server-1.0.8-M13 red5
mv red5 /usr/share/

Start it:

cd /usr/share/red5
sh red5.sh

As a result, you will see the Red5 server running at http://localhost:5080

If a firewall is running, add the ports:

ACCEPT tcp -- anywhere anywhere tcp dpt:1935 
ACCEPT tcp -- anywhere anywhere tcp dpt:5080 
ACCEPT tcp -- anywhere anywhere tcp dpt:omniorb 
ACCEPT tcp -- anywhere anywhere tcp dpt:8443

As a player, I can suggest video.js as one option.

RainbowCrack

The other day I put together a rig for running RainbowCrack «tests».

Algorithms available so far: SHA1, MD5

Tables:

md5_mixalpha-numeric#1-8
(ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789)
Success probability: 99.9%
127GB

md5_mixalpha-numeric#1-9
(ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789)
Success probability: 96.8%
1009GB

md5_mixalpha-numeric-all-space#1-8
(ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789 )
Success probability: 99.9%
1049GB

sha1_mixalpha-numeric#1-8
(ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789)
Success probability: 99.9%
127GB

sha1_mixalpha-numeric#1-9
(ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789)
Success probability: 96.8%
690GB


Hardware configuration used for this generation:

Core i7 4790 CPU (rcrack)
2 x GeForce GTX 680 GPU (rcrack_cuda)
4 x Seagate ST2000DM001 in RAID 0
4GB DDR3

The main computation runs on the two GPUs, but a CPU variant is also available for performance testing.


Performance for a single hash

In the current configuration, computation speed reaches:

8.96E+12 plaintexts per second on 2 x GeForce GTX 680 GPUs

9.21E+11 plaintexts per second using the Core i7 4790 CPU.

Limited free 🙂 access is open at: http://891rpm.arthead.ru/rc/

0. [2016-01-20] Access is granted by request on the service page.
1. [2016-01-20] A limit on the number of simultaneous test runs has been introduced.
2. [2016-01-26] A limit on requests to the service from a single IP has been introduced.

Planned:

0. Add Cyrillic to the tables
1. Add the RIPEMD-160 and MySQL SHA1 algorithms

P.S. The front end for the service (written for my own practice) uses Django.
I'll publish the sources later 🙂

Visual hash recognition

13-character hash:

DES (UNIX)

FkL6hgPZ138Ug
EZUv/lAcqf06.

16-character hash:

MySQL

29bad1457ee5e49e

32-character hashes:

no salt: md5 or md5($md5)

1a1dc91c907325c69271ddf0c944bc72

2-character salt: md5($salt.$pass)

bbde0359d80a56c0765bf30e3116c73d:b0

3- or 30-character salt: md5(md5($pass).$salt)

9069b0a70e89821710c7b9c6ddfa1339:*|/
33962b23840f5212ff5f594c3dea1b5a:VhVpcK>xIzU=JYi&|7wje4MWyBF$?#

5- or 8-character salt: md5(md5($salt).md5($pass))

8ca78a583e1b35e175ec5bd02e880e35:gEA_Z
66cea44067b962a71d9f578363aae68c:mQHJedIM

16- or 32-character salt: md5($pass.$salt)

a382a8e7d694cb4fc71d8cda67ee0802:HgtalJ4UaxuSBwSX
d666f494d2ea2bd1819a3ca2e9409f36:LCyAwlMKplHxkFp6SZSfNlnLdBTrOcG6

40-character hashes:

MySQL 5

telltale: upper-case characters

root:*94BDCEBE19083CE2A1F959FD02F964C7Af4CfC29
*32FD2FB910CC84D8E710B431E1C208514F56D9EF
7F44978F28CCD7874293693FD73F4BDDD64321E1

SHA-1

telltale: lower-case characters

9d4e1e23bd5b727046a9e3b4b7db57bd8d6ee684

SHA1($username.$pass)

telltale: presence of a 4-character salt; a username must be present

user:45f106ef4d5161e7aa38cf6c666607f25748b6ca:bf76

Other hashes:

MD5 (UNIX)

telltale: $1$ at the start of the hash

$1$dNSCl38g$f0hqUX9K7lr3hFzU4JspZ0

MD5 (WordPress)

telltale: $P$B or $P$9 at the start of the hash

$P$BHUnawZ54ZdpoZOm4sbVAK0

MD5 (PHPBB3)

telltale: $H$7 or $H$9 at the start of the hash

$H$9x9g17Renn7Nk1l8MG64nD1

MD5 (APR)

telltale: $apr1$ at the start of the hash

$apr1$$kRqAZHnuzcwDL84Mm7oc1.

OpenBSD Blowfish

telltale: $2a$ at the start of the hash

$2a$08$Pv6/4g5LwwisUCJmim/tR.CT7vXfUYjsSqDfZ/YU.1urjzNmQFQum

SHA-256 (UNIX)

telltale: $5$ at the start of the hash

$5$1$6rPISQo58O3bm0PRwPmc3uhLi.TPE1NhHq0VIVf1X/8

SHA-512 (UNIX)

telltale: $6$ at the start of the hash

$6$1$RRbbJXv8x38tKhWFDQ3m9bE1L/2yteMGAJ7E6h1OMqhpFDO3EHUvv3YD0oX0NywDa.toXreflU/VBJ2dwKTyM0
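The telltales above can be sketched as a small Python helper. This is my own rough heuristic based only on the patterns listed in this post (and the function name is mine); it is not a definitive identification tool:

```python
import re

def guess_hash_type(h):
    # Prefix-based formats; $apr1$ must be tested before $1$.
    prefixes = [
        ("$apr1$", "MD5 (APR)"),
        ("$1$", "MD5 (UNIX)"),
        ("$2a$", "OpenBSD Blowfish"),
        ("$5$", "SHA-256 (UNIX)"),
        ("$6$", "SHA-512 (UNIX)"),
        ("$P$", "MD5 (WordPress)"),
        ("$H$", "MD5 (PHPBB3)"),
    ]
    for prefix, name in prefixes:
        if h.startswith(prefix):
            return name
    if re.fullmatch(r"[0-9a-f]{32}", h):
        return "MD5"          # 32 hex chars, lower case
    if re.fullmatch(r"[0-9a-f]{40}", h):
        return "SHA-1"        # 40 hex chars, lower case
    if re.fullmatch(r"\*?[0-9A-F]{40}", h):
        return "MySQL 5"      # 40 hex chars, upper case, optional leading *
    if re.fullmatch(r"[0-9a-f]{16}", h):
        return "MySQL"        # old 16-character format
    if len(h) == 13:
        return "DES (UNIX)"
    return "unknown"

print(guess_hash_type("9d4e1e23bd5b727046a9e3b4b7db57bd8d6ee684"))  # SHA-1
print(guess_hash_type("$1$dNSCl38g$f0hqUX9K7lr3hFzU4JspZ0"))        # MD5 (UNIX)
```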

Adding ES7 Class Properties to an ES6 React Component


I’ve been investigating React and Flux over the last three posts, but one thing really didn’t sit right with me. That one thing is the React PropTypes in an ES6 class. I had to do something like this:

import React from 'react';
 
class NavBrand extends React.Component {
  render() {
    return (<h1>{this.props.title}</h1>);
  }
}
 
NavBrand.propTypes = {
  title: React.PropTypes.string.isRequired
};
 
export default NavBrand

It's ES6 code, but the fact that I have to munge the class after it has been created did not sit well. I wanted the propTypes to be a part of the class – just like in just about every other language.

Fortunately, there is a current proposal for class properties in ES7 (or ES2016 or ES vNext). It’s not ratified yet and the proposal may change. The ES6 version (above) is ratified and perfectly valid code. Let’s look at what the alternate ES7 version would look like:

import React from 'react';
 
export default class NavBrand extends React.Component {
  static propTypes = {
    title: React.PropTypes.string.isRequired
  };
 
  render() {
    return (<h1>{this.props.title}</h1>);
  }
}

I like the aesthetics of this version much more than the ES6 version. The propTypes is in the right place.

The bad news: This doesn’t lint and it doesn’t compile.
The good news: We can fix that!

Linting

I use eslint for my linter. You can change the parser that eslint uses so that it uses babel, just like the compiler stage. Anything that would go through the compiler will go through the linter as well. To do this I need another package – babel-eslint:

npm install --save-dev babel-eslint

Then I need to adjust my .eslintrc file to use the new parser:

"parser": "babel-eslint",
"plugins": [
  "react"
],

Linting will pass if you run eslint on that JSX file, or if you run gulp eslint from my tutorial code. If you introduce an error (say, removing the semi-colon from the return statement), eslint will still catch that.

Transpiling

To fix the transpiling, I need to make changes to my task. Here is the new task:

var babelify = require('babelify'),
    browserify = require('browserify'),
    gulp = require('gulp'),
    rename = require('gulp-rename'),
    source = require('vinyl-source-stream');
 
var config = {
    dest: './wwwroot'
};
 
var files = {
    entry: './app.jsx'
};
 
gulp.task('bundle', ['eslint'], function () {
    var bundler = browserify({
        extensions: ['.js', '.jsx'],
        transform: [babelify.configure({
          optional: [ "es7.classProperties" ]
        })],
        debug: true // produce source maps
    });
 
    return bundler.add(files.entry)
        .bundle()
        .pipe(source(files.entry))
        .pipe(rename('bundle.js'))
        .pipe(gulp.dest(config.dest));
});

Of course, you will put the requires at the top of your gulp file, and have extra stuff in the config and files objects. However, the task itself works. Note that I need to configure babelify to add in an optional configuration list that enables es7.classProperties. If your code uses one of the other optional experimental features, you can list those too.

Thanks to this, I can now convert all my React components to use the ES7 class properties syntax. I hope this proposal makes it in.

http://shellmonger.com/2015/08/21/adding-es7-class-properties-to-an-es6-react-component/

Setting the screen resolution in Linux

1. Use cvt to compute the parameters that we will pass to xrandr:

cvt 2560 1440

Example:

# 2560x1440 59.96 Hz (CVT 3.69M9) hsync: 89.52 kHz; pclk: 312.25 MHz
Modeline "2560x1440_60.00" 312.25 2560 2752 3024 3488 1440 1443 1448 1493 -hsync +vsync


2. Pass the cvt parameters to xrandr:

xrandr --newmode "2560x1440" 312.25 2560 2752 3024 3488 1440 1443 1448 1493 -Hsync +Vsync

(The Hsync and Vsync parameters must start with capital letters.)


3. Add the new "2560x1440" mode in xrandr:

xrandr --addmode Virtual1 "2560x1440"

(You can replace Virtual1 with whatever is needed… you can check with the xrandr command.)


4. Enable the new mode:

xrandr --output Virtual1 --mode "2560x1440"


WGET: best practices

Download all URLs listed in the file FILE:

$ wget -i FILE

Download a file into the specified directory (-P):

$ wget -P /path/for/save ftp://ftp.example.org/some_file.iso

Using a username and password over FTP/HTTP:

$ wget ftp://login:password@ftp.example.org/some_file.iso
$ wget --user=login --password=password ftp://ftp.example.org/some_file.iso

Download in the background (-b):

$ wget -b ftp://ftp.example.org/some_file.iso

Resume (-c, continue) a previously incomplete download:

$ wget -c http://example.org/file.iso

Download a page following links 10 levels deep, writing the log to the file log:

$ wget -r -l 10 http://example.org/ -o log

Download the contents of the directory http://example.org/~luzer/my-archive/ and all of its subdirectories without climbing up the directory hierarchy:

$ wget -r --no-parent http://example.org/~luzer/my-archive/

To have links in all downloaded pages converted into relative links for local browsing, use the -k flag:

$ wget -r -l 10 -k http://example.org/

Authentication on the server is also supported:

$ wget --save-cookies cookies.txt \
  --post-data 'user=foo&password=bar' \
  http://example.org/auth.php

Copy an entire site:

$ wget -r -l 0 -k http://example.org/

Download an image gallery with thumbnails:

$ wget -r -k -p -l1 -I /images/ -I /thumb/ \
  --execute robots=off www.example.com/gallery.html

Save a web page (as it opens locally) into the current directory:

$ (cd cli && wget -nd -pHEKk http://www.pixelbeat.org/cmdline.html)

Resume a partially downloaded file:

$ wget -c http://www.example.com/large.file

Download multiple files into the current directory:

$ wget -r -nd -np -l1 -A '*.jpg' http://www.example.com/

Display the output directly (on screen):

$ wget -q -O- http://www.pixelbeat.org/timeline.html | grep 'a href' | head

Download a URL at 01:00 into the current directory:

$ echo 'wget url' | at 01:00

Throttle the download speed, in this case to 20 KB/s:

$ wget --limit-rate=20k url

Check the links in a file:

$ wget -nv --spider --force-html -i bookmarks.html

Keep a local copy of a site up to date (convenient to use with cron):

$ wget --mirror http://www.example.com/

Use wildcards to download several pages:

$ wget http://site.com/?thread={1..100}
$ wget http://site.com/files/main.{css,js}

Start downloading a list of links in 5 threads:

$ cat links.txt | xargs -P 5 wget {}

Check whether the links from a file are alive:

$ cat list.txt
http://yandex.ru
http://google.ru
http://yandex.ru/qweqweqweqwe
$ wget -nv  --spider -i list.txt
2013-08-08 22:40:20 URL: http://www.yandex.ru/ 200 Ok
2013-08-08 22:40:20 URL: http://www.google.ru/ 200 OK
http://yandex.ru/qweqweqweqwe:
Remote file does not exist - broken link!

Sources: