Category Archives: Development

RBLTracker: Automated, Real-Time Black List (RBL) Tracking

RBLTracker is a new project I’ve been working on- an automated, real-time black list (RBL) tracking service.


RBL and URIBL Monitoring

The RBLTracker system automatically scans over 60 RBLs and 20 URIBLs, multiple times per day, to see if any of your IP addresses or website domains are listed, giving you the peace of mind you need to focus on your business.

The list of RBLs that RBLTracker monitors is always kept up to date.


Hosts

RBLTracker is a fully automated monitoring service, which checks your IP addresses and website domains against the most frequently used real-time black lists (RBLs) and Safe Browsing Databases.


 

Contacts

Get alerted immediately when one of your hosts is found on an RBL, URIBL, or in a Safe Browsing database.

Your RBLTracker account can be configured with multiple email addresses and phone numbers for receiving alerts about your hosts. Each contact can be individually configured with different notification rules, controlling how that contact is alerted when one of your hosts is blocked.


Google Safe Browsing

The Google Safe Browsing database includes lists of website domains that may be dangerous to visitors, because they are suspected of hosting phishing or malware.

RBLTracker will check your websites against the Google Safe Browsing database, and alert you immediately if any errors are found, ensuring that your visitors can reach your websites.


API Access

RBLTracker includes a simple, read-only, REST-based API that lets you poll our database for the current status of your hosts.

The RBLTracker API can easily be integrated into existing monitoring systems, like Nagios or Zabbix, by performing a simple HTTP GET request for the list of currently blocked hosts. The response data can be returned either as simple XML, or as a JSON object.

<?php
    echo file_get_contents('https://rbltracker.com/api/blocks.json?api_token=123');
?>

{
    "status_code": 200,
    "status_message": "Ok",
    "total_blocks": 1,
    "data": [
        {
            "id": "5afd618836c251cbb066803f25b87fa1",
            "host": "192.168.1.1",
            "name": "Primary Mail Server",
            "status": "active",
            "last_checked": "2012-12-30 21:00:07 EST",
            "first_blocked": "2012-12-17 11:05:03 EST",
            "block_period": "13 days 13:35:58",
            "blocked": "1"
        }
    ]
}
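Since integrating with a monitoring system is just a matter of parsing that response, a Nagios-style check can be sketched in a few lines of PHP. The rbl_check() helper below is hypothetical (it is not part of the RBLTracker API); it simply maps the sample response above to a Nagios exit code:

```php
<?php
// Hypothetical helper (not part of the RBLTracker API): map the JSON
// response shown above to a Nagios-style exit code and status message.
function rbl_check($json)
{
    $data = json_decode($json, true);

    if ( (!is_array($data)) || ($data['status_code'] != 200) )
    {
        return array(3, 'UNKNOWN: unexpected API response');
    }
    if ($data['total_blocks'] > 0)
    {
        return array(2, 'CRITICAL: ' . $data['total_blocks'] . ' host(s) currently blocked');
    }

    return array(0, 'OK: no hosts blocked');
}

//
// usage- fetch the list of blocks, then report and exit:
//
// list($code, $message) = rbl_check(
//     file_get_contents('https://rbltracker.com/api/blocks.json?api_token=123'));
//
// echo $message . "\n";
// exit($code);
```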

Don’t let your customers be the first to know when your email systems or websites get blocked.

Sign up for FREE today!

 

Net_DNS2 Version 1.2.5 Released

I’ve released version 1.2.5 of the PEAR Net_DNS2 library- you can install it now through the command line PEAR installer:

pear install Net_DNS2

Or download it directly from the Google Code page here.

This release includes some important fixes to the way I was calculating the offset values when building the DNS packets. Here is the full list of changes for this release:

  • changed the socket_connect() code to start off non-blocking, and call select() after connect() so a timeout on an invalid server works properly
  • added the new TLSA RR – RFC 6698
  • fixed the socket defines again; apparently the values of the SOCK_* constants are different under Solaris
  • changed the Net_DNS2_Updater::update() so you can pass a reference to a variable that will be populated with the response object
  • moved the lines that add the response server/type to after the is_null() check- it should have been there to begin with.
  • fixed a whole bunch of cases where I wasn’t incrementing the offset values properly
  • added support to set the RD (recursion desired) bit when making a request

How to Make Images Look Good on the iPad 3

The new Apple iPad (iPad 3) came out on the 16th, and probably the first thing I noticed about it is how great the screen looks. It’s armed with the same “retina” display that the iPhone 4 came out with a few years ago. It’s great! Except for one thing- a lot of images on the web look like crap now!

(This first image is the standard resolution, the second is optimized for iPhone 4 and iPad 3 screens)


Now- this isn’t new. The iPhone 4 has the same “quirk”, but I think it’s less noticeable given the size of its screen. It really stands out on the iPad 3’s 9.7 inch screen.

The Problem with Pixels

With the advent of high pixel density displays, the pixel itself is now a relative unit.

According to the CSS 2.1 Spec:

Pixel units are relative to the resolution of the viewing device, i.e., most often a computer display. If the pixel density of the output device is very different from that of a typical computer display, the user agent should rescale pixel values.

So, a CSS “pixel” indicates one point on the “virtual” pixel grid to which your CSS design aligns. This either directly matches the actual device, or it is “somehow” scaled to suit.

Talking about the new iPad 3 specifically, the new retina display has a huge 2048 x 1536 pixel resolution- double what most sites are designed for. On a desktop machine, if you doubled your screen resolution, websites would just show up half as big. But on the iPad 3, it stretches the site so it “fills up” the screen. The problem with this is that stretching raster images (GIFs, PNGs, JPEGs) can make them look really distorted and full of artifacts.

So- how do you fix this?

The easiest way to fix this is to make a second copy of all your images at double the resolution, and then use these versions when visitors are on an iPad or iPhone (or any device with a higher pixel density).

Fixing it in CSS

The min-device-pixel-ratio media query can be used to target styles at high pixel density displays. For the moment, vendor prefixes are required, until there is a standard format. Webkit uses a straight prefix, Mozilla puts the prefix in the middle of the name, and Opera requires the pixel ratio as a fraction.

min--moz-device-pixel-ratio: 2
-o-min-device-pixel-ratio: 2/1
-webkit-min-device-pixel-ratio: 2
min-device-pixel-ratio: 2

Right now, we only care about the iPhone/iPad, so we’ll use the -webkit-min-device-pixel-ratio query.

So let’s say you had a single class, loading the 300 x 300 px image:

.logo {
 background-image: url(cat_300.jpg);
 width: 300px;
 height: 300px;
}

You would then create a second copy of the image at 600 x 600 px resolution, and add this in your CSS:

@media only screen and (-webkit-min-device-pixel-ratio: 2) {
 .logo {
   background-image: url(cat_600.jpg);
   background-size: 300px 300px;
 }
}

This loads the 600 x 600 px image, but forces the background-size to 300 x 300 px when the device is a webkit device, and the pixel ratio is 2.

Forcing the 600 x 600 px image into a 300 x 300 px box forces the image to a pixel density of 2.

Fixing Inline Images

So that’s CSS- what about plain old <img> tags?

You can use the window.devicePixelRatio property in JavaScript to determine if the pixel density of the screen is > 1 and if so, cycle through all the images on the page and change their image src to the higher resolution image.

An easy way to do this is to add a class to all the images you want to replace. In this case, I’ve added the “hd” class to the image tags.

<img src="cat_300.jpg" width="300" height="300" class="hd" />

Then add some simple JavaScript to update the tags. In my example, I’ve used jQuery just to make things easier, and simply did a text replace in the src image name, changing the “300” to a “600”. The width/height of the <img> tag needs to stay at 300 x 300 px, forcing the pixel density of 2.

$(document).ready(function()
{
    if ( (window.devicePixelRatio) && (window.devicePixelRatio >= 2) )
    {
        var images = $('img.hd');        

        for(var i=0; i<images.length; i++)
        {
            images.eq(i).attr('src', images.eq(i).attr('src').replace('300', '600'));
        }
    }
});

Now, there are all sorts of ways to do this- this is just one example.

The other way to do this is to just always load the higher resolution images. The only downside is that those higher resolution images are likely almost twice the size of their originals- so loading them only when required will save bandwidth.

What about SVG?

SVG (Scalable Vector Graphics) is another great way to handle this. SVG files are actually XML files with instructions on how to “draw” the image on a canvas, rather than a static raster image. SVG files can scale to different sizes and pixel densities without distorting.

The only downsides with SVG are that the file size for complex images can actually be a lot bigger than their raster counterparts, and browser support is still incomplete- so if you care about your site working in Internet Explorer, then you still need to have some raster images and do conditional loading.
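One rough way to do that conditional loading is server-side; the sketch below picks an image based on the browser’s Accept header. Both the file names and the Accept-header sniffing are assumptions for illustration- the header check is not reliable across all browsers.

```php
<?php
// Rough sketch of server-side conditional loading: serve the SVG only when
// the browser advertises support for it in its Accept header; otherwise
// fall back to the raster version. The file names are hypothetical, and
// Accept-header sniffing is not reliable for every browser.
function pick_logo($accept)
{
    if (strpos($accept, 'image/svg+xml') !== false)
    {
        return 'logo.svg';
    }

    return 'logo.png';
}

// usage:
//
// echo '<img src="' . pick_logo($_SERVER['HTTP_ACCEPT']) . '" width="300" height="300" />';
```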

Parting Thoughts

Remember- you don’t *actually* need to do any of this; a lot of images will still look “fine” being stretched.

But if you want your site to look its best, it’s worth spending the time to optimize it for higher pixel density devices- there’s going to be no shortage of them in the coming years!

Net_DNS2 Version 1.2.1

I’ve released version 1.2.1 of the PEAR Net_DNS2 library- you can install it now through the command line PEAR installer:

pear install Net_DNS2

Or download it directly from the Google Code page here.

This is just a small maintenance release to fix a few bugs:

  • changed the Net_DNS2_Sockets::_sock property from private to protected; this was causing some problems when the request was failing.
  • PHP doesn’t support unsigned integers, but many of the RRs return unsigned values (like the SOA), so there is the possibility that the value will overrun on 32bit systems, and you’ll end up with a negative value. A new function was added to convert the negative value to a string with the correct unsigned value.
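The idea behind that fix can be sketched like this- note that unsigned32() is a hypothetical name for illustration, not the actual function added to the library:

```php
<?php
// Hypothetical sketch of the fix (unsigned32() is not the real function
// name): when a 32-bit unsigned value overruns PHP's signed integer range
// and shows up negative, add 2^32 to recover the unsigned value, and
// return it as a string so it's safe on 32bit systems.
function unsigned32($value)
{
    if ($value < 0)
    {
        $value += 4294967296;
    }

    return sprintf('%.0f', $value);
}

// e.g. an SOA serial of 4294967295 read back as -1 on a 32bit system:
//
// echo unsigned32(-1);  // "4294967295"
```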

How To Mine Twitter Streams from PHP in Real Time

UPDATE: I’ve written a new post with an example of how to connect to the v1.1 Twitter API, using OAuth – here.

Need to mine Twitter for tweets related to certain keywords?

No problem-

Twitter provides a pretty simple streaming interface to the onslaught of tweets it receives, letting you specify whatever keywords you want to search for, in a real-time “live” way.

To do this, I created a simple PHP class that can run in the background, collecting tweets for certain keywords:

ctwitter_stream.php

<?php

class ctwitter_stream
{
    private $m_username;
    private $m_password;

    public function __construct()
    {
        //
        // set a time limit to unlimited
        //
        set_time_limit(0);
    }

    //
    // set the login details
    //
    public function login($_username, $_password)
    {
        $this->m_username = $_username;
        $this->m_password = $_password;
    }

    //
    // process a tweet object from the stream
    //
    private function process_tweet(array $_data)
    {
        print_r($_data);

        return true;
    }

    //
    // the main stream manager
    //
    public function start(array $_keywords)
    {
        while(1)
        {
            $fp = fsockopen("ssl://stream.twitter.com", 443, $errno, $errstr, 30);
            if (!$fp)
            {
                echo "ERROR: Twitter Stream Error: failed to open socket\n";
            } else
            {
                //
                // build the request
                //
                $request  = "GET /1/statuses/filter.json?track=";
        $request .= urlencode(implode(',', $_keywords)) . " HTTP/1.1\r\n";
                $request .= "Host: stream.twitter.com\r\n";
                $request .= "Authorization: Basic ";
                $request .= base64_encode($this->m_username . ':' . $this->m_password);
                $request .= "\r\n\r\n";

                //
                // write the request
                //
                fwrite($fp, $request);

                //
                // set it to non-blocking
                //
                stream_set_blocking($fp, 0);

                while(!feof($fp))
                {
                    $read   = array($fp);
                    $write  = null;
                    $except = null;

                    //
                    // select, waiting up to 10 minutes for a tweet; if we don't get one,
                    // then reconnect, because it's possible something went wrong.
                    //
                    $res = stream_select($read, $write, $except, 600, 0);
                    if ( ($res === false) || ($res === 0) )
                    {
                        break;
                    }

                    //
                    // read the JSON object from the socket
                    //
                    $json = fgets($fp);
                    if ( ($json !== false) && (strlen($json) > 0) )
                    {
                        //
                        // decode the JSON object to a PHP array
                        //
                        $data = json_decode($json, true);
                        if ($data)
                        {
                            //
                            // process it
                            //
                            $this->process_tweet($data);
                        }
                    }
                }
            }

            if ($fp)
            {
                fclose($fp);
            }
            sleep(10);
        }

        return;
    }
}

The “process_tweet()” method will be called for each matching tweet- just modify that method to process the tweet however you want (load it into a database, print it to screen, email it, etc). The keyword matching isn’t perfect- if you search for a string of words, it won’t necessarily match the words in that exact order, but you can check that yourself from the process_tweet() method.
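For example, a small helper like this (hypothetical, not part of the class above) could be called from process_tweet() to throw away tweets that don’t contain an exact phrase:

```php
<?php
// Hypothetical helper for process_tweet(): check that the tweet text
// contains the exact phrase, case-insensitively, since Twitter's track
// filter matches the individual words in any order.
function contains_phrase($text, $phrase)
{
    return (stripos($text, $phrase) !== false);
}

// inside process_tweet(), skip tweets that don't match the exact phrase:
//
// if (contains_phrase($_data['text'], 'facebook credits') == false)
// {
//     return false;
// }
```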

Then create a simple PHP application to run the collector:

require 'ctwitter_stream.php';

$t = new ctwitter_stream();

$t->login('your twitter username', 'your twitter password');

$t->start(array('facebook', 'fbook', 'fb'));

Just provide your Twitter account username/password, and then an array of keywords/strings to search for.

Since this application runs continuously in the background, it’s obviously not meant to be run via a web request, but meant to be run from the command line of your Unix or Windows box.

According to the Twitter documentation, the default access level allows up to 400 keywords, so you can track all sorts of things at the same time. If you need more details about the Twitter streaming API, it’s available here.

This class uses the HTTPS PHP stream– so you’ll need the OpenSSL extension enabled for it to work.