
Convert the whole site to use Pelican instead of Jekyll

Gergely Polonkai, 8 months ago
GPG Key ID: 38F402C8471DDE93 (no known key found for this signature in database)
100 changed files with 241 additions and 5620 deletions

.bundle/config (+0 / -3)

@@ -1,3 +0,0 @@
BUNDLE_PATH: "vendor"

.gitignore (+2 / -4)

@@ -1,4 +1,2 @@

.hyde.el (+0 / -2)

@@ -1,2 +0,0 @@
(setq hyde/git/remote "origin"
hyde/git/remote-branch "master")

(unnamed file, +0 / -11)

@@ -1,11 +0,0 @@
layout: page
title: Not Found
permalink: /404.html

The page you are looking for is not here. Maybe it was, but I have removed it. Most likely it was intentional. If you think I made a mistake, please tell me.

{% if page.url contains '/akarmi' %}
If you are looking for the pictures that used to be here, you should definitely contact me. For reasons.
{% endif %}

CNAME (+0 / -1)

@@ -1 +0,0 @@

Gemfile (+0 / -5)

@@ -1,5 +0,0 @@
source ''

gem 'jekyll'
gem 'jekyll-gist'
gem 'jekyll-paginate'

Gemfile.lock (+0 / -78)

@@ -1,78 +0,0 @@
addressable (2.7.0)
public_suffix (>= 2.0.2, < 5.0)
colorator (1.1.0)
concurrent-ruby (1.1.5)
em-websocket (0.5.1)
eventmachine (>= 0.12.9)
http_parser.rb (~> 0.6.0)
eventmachine (1.2.7)
faraday (0.17.0)
multipart-post (>= 1.2, < 3)
ffi (1.11.1)
forwardable-extended (2.6.0)
http_parser.rb (0.6.0)
i18n (1.7.0)
concurrent-ruby (~> 1.0)
jekyll (4.0.0)
addressable (~> 2.4)
colorator (~> 1.0)
em-websocket (~> 0.5)
i18n (>= 0.9.5, < 2)
jekyll-sass-converter (~> 2.0)
jekyll-watch (~> 2.0)
kramdown (~> 2.1)
kramdown-parser-gfm (~> 1.0)
liquid (~> 4.0)
mercenary (~> 0.3.3)
pathutil (~> 0.9)
rouge (~> 3.0)
safe_yaml (~> 1.0)
terminal-table (~> 1.8)
jekyll-gist (1.5.0)
octokit (~> 4.2)
jekyll-paginate (1.1.0)
jekyll-sass-converter (2.0.1)
sassc (> 2.0.1, < 3.0)
jekyll-watch (2.2.1)
listen (~> 3.0)
kramdown (2.1.0)
kramdown-parser-gfm (1.1.0)
kramdown (~> 2.0)
liquid (4.0.3)
listen (3.2.0)
rb-fsevent (~> 0.10, >= 0.10.3)
rb-inotify (~> 0.9, >= 0.9.10)
mercenary (0.3.6)
multipart-post (2.1.1)
octokit (4.14.0)
sawyer (~> 0.8.0, >= 0.5.3)
pathutil (0.16.2)
forwardable-extended (~> 2.6)
public_suffix (4.0.1)
rb-fsevent (0.10.3)
rb-inotify (0.10.0)
ffi (~> 1.0)
rouge (3.12.0)
safe_yaml (1.0.5)
sassc (2.2.1)
ffi (~> 1.9)
sawyer (0.8.2)
addressable (>= 2.3.5)
faraday (> 0.8, < 2.0)
terminal-table (1.8.0)
unicode-display_width (~> 1.1, >= 1.1.1)
unicode-display_width (1.6.0)




Makefile (+74 / -0)

@@ -0,0 +1,74 @@


PELICAN?=pelican

BASEDIR=$(CURDIR)
INPUTDIR=$(BASEDIR)/content
OUTPUTDIR=$(BASEDIR)/output
CONFFILE=$(BASEDIR)/
PUBLISHCONF=$(BASEDIR)/

DEBUG ?= 0
ifeq ($(DEBUG), 1)
	PELICANOPTS += -D
endif

RELATIVE ?= 0
ifeq ($(RELATIVE), 1)
	PELICANOPTS += --relative-urls
endif

html:
	$(PELICAN) $(INPUTDIR) -o $(OUTPUTDIR) -s $(CONFFILE) $(PELICANOPTS)

help:
	@echo 'Makefile for a pelican Web site'
	@echo ''
	@echo 'Usage:'
	@echo '   make html                        (re)generate the web site'
	@echo '   make clean                       remove the generated files'
	@echo '   make regenerate                  regenerate files upon modification'
	@echo '   make publish                     generate using production settings'
	@echo '   make serve [PORT=8000]           serve site at http://localhost:8000'
	@echo '   make serve-global [SERVER=] serve (as root) to $(SERVER):80'
	@echo '   make devserver [PORT=8000]       serve and regenerate together'
	@echo '   make ssh_upload                  upload the web site via SSH'
	@echo '   make rsync_upload                upload the web site via rsync+ssh'
	@echo ''
	@echo 'Set the DEBUG variable to 1 to enable debugging, e.g. make DEBUG=1 html'
	@echo 'Set the RELATIVE variable to 1 to enable relative urls'
	@echo ''

clean:
	[ ! -d $(OUTPUTDIR) ] || rm -rf $(OUTPUTDIR)

regenerate:
	$(PELICAN) -r $(INPUTDIR) -o $(OUTPUTDIR) -s $(CONFFILE) $(PELICANOPTS)

serve:
ifdef PORT
	$(PELICAN) -l $(INPUTDIR) -o $(OUTPUTDIR) -s $(CONFFILE) $(PELICANOPTS) -p $(PORT)
else
	$(PELICAN) -l $(INPUTDIR) -o $(OUTPUTDIR) -s $(CONFFILE) $(PELICANOPTS)
endif

serve-global:
ifdef SERVER
	$(PELICAN) -l $(INPUTDIR) -o $(OUTPUTDIR) -s $(CONFFILE) $(PELICANOPTS) -p $(PORT) -b $(SERVER)
else
	$(PELICAN) -l $(INPUTDIR) -o $(OUTPUTDIR) -s $(CONFFILE) $(PELICANOPTS) -p $(PORT) -b
endif

devserver:
ifdef PORT
	$(PELICAN) -lr $(INPUTDIR) -o $(OUTPUTDIR) -s $(CONFFILE) $(PELICANOPTS) -p $(PORT)
else
	$(PELICAN) -lr $(INPUTDIR) -o $(OUTPUTDIR) -s $(CONFFILE) $(PELICANOPTS)
endif

publish:
	$(PELICAN) $(INPUTDIR) -o $(OUTPUTDIR) -s $(PUBLISHCONF) $(PELICANOPTS)

.PHONY: html help clean regenerate serve serve-global devserver publish

Pipfile (+14 / -0)

@@ -0,0 +1,14 @@
[[source]]
name = "pypi"
url = ""
verify_ssl = true

[packages]
pelican = {extras = ["markdown"],version = "*"}
gergelypolonkaieu-site = {editable = true,path = "."}
typogrify = "*"

[requires]
python_version = "3.7"

Pipfile.lock (+151 / -0)

@@ -0,0 +1,151 @@
{
    "_meta": {
        "hash": {
            "sha256": "3848a327090b82fa6faf252335283a4c4648c0848fcf02cd841428b45a36c238"
        },
        "pipfile-spec": 6,
        "requires": {
            "python_version": "3.7"
        },
        "sources": [
            {
                "name": "pypi",
                "url": "",
                "verify_ssl": true
            }
        ]
    },
    "default": {
        "blinker": {
            "hashes": [],
            "version": "==1.4"
        },
        "docutils": {
            "hashes": [],
            "version": "==0.15.2"
        },
        "feedgenerator": {
            "hashes": [],
            "version": "==1.9"
        },
        "gergelypolonkaieu-site": {
            "editable": true,
            "path": "."
        },
        "jinja2": {
            "hashes": [],
            "version": "==2.10.3"
        },
        "markdown": {
            "hashes": [],
            "version": "==3.1.1"
        },
        "markupsafe": {
            "hashes": [],
            "version": "==1.1.1"
        },
        "pelican": {
            "extras": [
                "markdown"
            ],
            "hashes": [],
            "index": "pypi",
            "version": "==4.2.0"
        },
        "pygments": {
            "hashes": [],
            "version": "==2.4.2"
        },
        "python-dateutil": {
            "hashes": [],
            "version": "==2.8.1"
        },
        "pytz": {
            "hashes": [],
            "version": "==2019.3"
        },
        "six": {
            "hashes": [],
            "version": "==1.12.0"
        },
        "smartypants": {
            "hashes": [],
            "version": "==2.0.1"
        },
        "typogrify": {
            "hashes": [],
            "index": "pypi",
            "version": "==2.0.7"
        },
        "unidecode": {
            "hashes": [],
            "version": "==1.1.1"
        }
    },
    "develop": {}
}

(unnamed file, +0 / -10)

@@ -1,10 +0,0 @@

## Initial start

git clone $REPO
cd $REPO
bundle install --path vendor/bundle
bundle exec jekyll server

_config.yml (+0 / -18)

@@ -1,18 +0,0 @@
# Site settings
title: Gergely Polonkai
description: "developer, systems engineer and administrator"
url: ""
timezone: Europe/Budapest
name: Gergely Polonkai
paginate: 10
paginate_path: "/blog/page/:num"
exclude: ['', 'Gemfile', 'Gemfile.lock', 'CNAME', ".hyde.el", "vendor"]
include: ['.well-known']
plugins:
  - jekyll-gist
  - jekyll-paginate

# Build settings
markdown: kramdown
permalink: pretty

_data/contacts.yaml (+0 / -55)

@@ -1,55 +0,0 @@
- text: E-mail
image: email.png
icon: envelope-o
- text: Stack Exchange
image: stackexchange.png
icon: stack-exchange
- text: LinkedIn
image: linkedin.png
icon: linkedin
- text: Skype
link: skype:gergely.polonkai
image: skype.png
icon: skype
- text: Facebook
image: facebook.png
icon: facebook
- text: Google+
image: google_plus.png
icon: google-plus
- text: Twitter
image: twitter.png
icon: twitter
- text: Tumblr
image: tumblr.png
icon: tumblr
- text: deviantArt
image: deviantart.png
icon: deviantart
- text: Hashnode
image: hashnode.png
- text: Keybase
image: keybase.png
icon: keybase
- text: Liberapay
image: liberapay.png
icon: liberapay
- text: Mastodon
image: mastodon.png
icon: mastodon
- text: Pay me a coffee
image: paypal.png
icon: paypal

(unnamed file, +0 / -1738; file diff suppressed because it is too large)

_drafts/ (+0 / -5)

@@ -1,5 +0,0 @@
``` lisp
(defun cut-at-ten ()
(while (re-search-forward "," (save-excursion (end-of-line) (point)) t 10)

_drafts/ (+0 / -15)

@@ -1,15 +0,0 @@
layout: post
title: "GtkActionable in action"
name: "Gergely Polonkai"
email: ""

I have seen several people (including myself) struggling with
disabling/enabling menu items, toolbar buttons and similar UI
interfaces based on different conditions. It gets even worse if there
are multiple representations of the same action in the same
application, e.g. when a menu item and a toolbar button exist for the same
action. But with GTK+ 3.4 we have GtkActionable, which is made exactly for
this kind of situation.

_drafts/ (+0 / -17)

@@ -1,17 +0,0 @@
layout: post
title: "Measuring code coverage with codecov for libtool projects"
name: "Gergely Polonkai"
email: ""

I have recently found [codecov][]; they offer free
services for public GitHub projects. As I have recently started writing
tests for my SWE-GLib project, I decided to give it a go. Things are not
this easy if you use GNU Autotools and libtool, though…

The problem here is that these tools generate their output under `src/.libs/`
(given that your sources are under `src/`), and `gcov` has a hard time
finding the coverage data files. Well, at least in the codecov
environment; it works fine on my machine.

_drafts/ (+0 / -326)

@@ -1,326 +0,0 @@
layout: post
title: "Writing a GNOME Shell extension"

I could not find a good tutorial on how to write a GNOME Shell
extension. There is a so-called step by step
[instruction list](
on how to do it, but it has its flaws, including grammar and clarity.
As I wanted to create an extension for my SWE GLib library to display
the current position of some planets, I dug into existing (and working)
extensions’ source code and made up something. Comments welcome!


GNOME Shell extensions are written in JavaScript and are interpreted
by [GJS]( Using
introspected libraries from JavaScript is not a problem for me (see
SWE GLib’s
[Javascript example](;
it’s not beautiful, but it’s working), but wrapping your head around
the Shell’s concept can take some time.

The Shell is a Clutter stage, and all the buttons (including the
top-right “Activities” button) are actors on this stage. You can add
practically anything to the Shell panel that you can add to a Clutter stage.

The other thing to remember is the lifecycle of a Shell
extension. After calling `init()`, there are two ways forward: you
either use a so called extension controller, or plain old JavaScript
functions `enable()` and `disable()`; I will go on with the former
method for reasons discussed later.

If you are fine with the `enable()`/`disable()` function version, you
can ease your job with the following command:

gnome-shell-extension-tool --create-extension

This will ask you a few parameters and create the necessary files for
you. On what these parameters should look like, please come with me to
the next section.

## Placement and naming

Extensions reside under `$HOME/.local/share/gnome-shell/extensions`,
where each of them have its own directory. The directory name has to be
unique, of course; to achieve this, they are usually the same as the
UUID of the extension.

The UUID is a string of alphanumeric characters, with some extras added.
Generally, it should match this regular expression:
`^[-a-zA-Z0-9@._]+$`. The convention is to use the form
`extension-name@author-id`, e.g. ``. Please see
[this link](
for some more information about this.

## Anatomy of an extension

Extensions consist of two main parts, `metadata.json` and
`extension.js`.
The `metadata.json` file contains compatibility information and, well,
some meta data:

{
    "shell-version": ["3.18"],
    "uuid": "",
    "name": "Planets",
    "description": "Display current planet positions"
}

Here, `shell-version` must contain all versions of GNOME Shell that are
known to load and display your extension correctly. You can insert minor
versions here, like I did, or exact version numbers, like `3.18.1`.

In the `extension.js` file, which contains the actual extension code,
the only thing you actually need is an `init()` function:

function init(extensionMeta) {
    // Do whatever it takes to initialize your extension, like
    // initializing the translations. However, never do any widget
    // magic here yet.

    // Then return the controller object
    return new ExtensionController(extensionMeta);
}

## Extension controller

So far so good, but what is this extension controller thing? It is an
object which is capable of managing your GNOME Shell extension. Whenever
the extension is loaded, its `enable()` method is called; when the
extension is unloaded, you guessed it, the `disable()` method gets called.

function ExtensionController(extensionMeta) {
    return {
        extensionMeta: extensionMeta,
        extension: null,

        enable: function() {
            this.extension = new PlanetsExtension(this.extensionMeta);

            // Main is imports.ui.main; the status-area role name is arbitrary
            Main.panel.addToStatusArea('planets', this.extension,
                                       0, "right");
        },

        disable: function() {
  ;
            this.extension = null;
        }
    };
}

This controller will create a new instance of the `PlanetsExtension`
class and add it to the panel’s right side when loaded. Upon
unloading, the extension’s actor gets destroyed (which, as you will
see later, gets created behind the scenes, not directly by us),
together with the extension itself. Also, for safety measures, the
extension is set to `null`.

## The extension

The extension is a bit more tricky, as, for convenience reasons, it
should extend an existing panel widget type.

function PlanetsExtension(extensionMeta) {
    this._init(extensionMeta);
}

PlanetsExtension.prototype = {
    __proto__: PanelMenu.Button.prototype,

    _init: function(extensionMeta) {, 0.0);

        this.extensionMeta = extensionMeta;

        this.panelContainer = new St.BoxLayout({style_class: 'panel-box'});;'panel-status-button');

        this.panelLabel = new St.Label({
            text: 'Loading',
            y_align: Clutter.ActorAlign.CENTER
        });
        this.panelContainer.add(this.panelLabel);
    }
};

Here we extend the Button class of panelMenu, so we will be able to do
some action upon activate.

The only parameter passed to the parent’s `_init()` function is
`menuAlignment`, with the value `0.0`, which is used to position the
menu arrow. (_Note: I cannot find any documentation on this, but it
seems that with the value `0.0`, a menu arrow is not added._)

The extension class in its current form is capable of creating the
actual panel button displaying the text “Loading” in its center.

## Loading up the extension

Now with all the necessary import lines added:

// The PanelMenu module that contains Button
const PanelMenu = imports.ui.panelMenu;
// The St class that contains lots of UI functions
const St =;
// Clutter, which is used for displaying everything
const Clutter =;

As soon as this file is ready, you can restart your Shell (press
Alt-F2 and enter the command `r`), and load the extension with
e.g. the GNOME Tweak Tool. You will see the Planets button on the
right. This little label showing the static text “Planets”, however,
is pretty boring, so let’s add some action.

## Adding some periodical change

Since the planets’ position continuously change, we should update our
widget every minute or so. Let’s patch our `_init()` a bit:

this.last_update = 0;

MainLoop.timeout_add(1, Lang.bind(this, function() {
    this.last_update++;
    this.panelLabel.set_text("Update_count: " + this.last_update);

    // returning true keeps the timeout source alive
    return true;
}));

This, of course, needs a new import line for `MainLoop` to become available:

const MainLoop = imports.mainloop;
const Lang = imports.lang;

Now if you restart your Shell, your brand new extension will increase
its counter every second. This, however, presents some problems.

SWE GLib queries can sometimes be expensive, both in CPU and disk
operations, so updating our widget every second may present problems.
Also, planets don’t go **that** fast. We may update our timeout value
from `1` to `60` or something, but why not just give our users a chance
to set it?

## Introducing settings

Getting settings from `GSettings` is hardly straightforward, especially
for software installed in a non-GNOME directory (which includes
extensions). To make our lives easier, I copied over a
[convenience library](
from the [Hamster project](’s
extension, originally written by Giovanni Campagna. The relevant
function here is `getSettings()`:

* getSettings:
* @schema: (optional): the GSettings schema id
* Builds and return a GSettings schema for @schema, using schema files
* in extensionsdir/schemas. If @schema is not provided, it is taken from
* metadata['settings-schema'].
function getSettings(schema) {
let extension = ExtensionUtils.getCurrentExtension();

schema = schema || extension.metadata['settings-schema'];

const GioSSS = Gio.SettingsSchemaSource;

// check if this extension was built with "make zip-file", and thus
// has the schema files in a subfolder
// otherwise assume that extension has been installed in the
// same prefix as gnome-shell (and therefore schemas are available
// in the standard folders)
let schemaDir = extension.dir.get_child('schemas');
let schemaSource;
if (schemaDir.query_exists(null))
    schemaSource = GioSSS.new_from_directory(schemaDir.get_path(),
                                             GioSSS.get_default(),
                                             false);
else
    schemaSource = GioSSS.get_default();

let schemaObj = schemaSource.lookup(schema, true);
if (!schemaObj)
throw new Error('Schema ' + schema + ' could not be found for extension '
+ extension.metadata.uuid + '. Please check your installation.');

return new Gio.Settings({ settings_schema: schemaObj });

You can either incorporate this function into your `extension.js` file,
or just use `convenience.js` file like I (and the Hamster applet) did
and import it:

const ExtensionUtils = imports.misc.extensionUtils;
const Me = ExtensionUtils.getCurrentExtension();
const Convenience = Me.imports.convenience;

Now let’s create the settings definition. GSettings schema files are XML
files. We want to add only one settings for now, the refresh interval.

<?xml version="1.0" encoding="utf-8"?>
<schemalist>
  <schema id="" path="/org/gnome/shell/extensions/planets/">
    <key name="refresh-interval" type="i">
      <default>30</default>
      <summary>Refresh interval of planet data</summary>
      <description>Interval in seconds. Sets how often the planet positions are recalculated. Setting this too low (e.g. below 30) may raise performance issues.</description>
    </key>
  </schema>
</schemalist>

You need to compile these settings with

glib-compile-schemas --strict schemas/

Now let’s utilize this new setting. In the extension’s `_init()`
function, add the following line:

this._settings = Convenience.getSettings();

And, for `getSettings()` to work correctly, we also need to extend our
`metadata.json` file:

"settings-schema": ""

After another restart (please, GNOME guys, add an option to reload
extensions!), your brand new widget will refresh every 30 seconds.

## Displaying the planet positions

## The settings panel

## Start an application

_drafts/ (+0 / -67)

@@ -1,67 +0,0 @@
layout: post
title: "Lessons you learn while writing an SDK"
date: 2016-03-19 12:34:56
tags: [development]
published: false
name: Gergely Polonkai

In the last few months I’ve been working on a GLib based SDK for
client applications that want to communicate with a Matrix
homeserver.
For whoever doesn’t know it, Matrix is a decentralized network of
servers (Homeservers). Clients can connect to them via HTTP and send
messages (events, in Matrix terminology) to each other. They are
called events because these messages can be pretty much anything from
instant messages through automated notifications to files or, well,
actual events (such as a vCalendar); anything that you can serialize
to JSON can go through this network.
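As a sketch of the idea, here is what a minimal Matrix room event could look like; the event type and `content` fields follow the client–server API, while the room and user IDs below are made up for illustration:

```python
import json

# A minimal instant-message event; the IDs below are invented examples.
event = {
    "type": "m.room.message",
    "room_id": "!",
    "sender": "",
    "content": {
        "msgtype": "m.text",
        "body": "Hello, Matrix!",
    },
}

wire = json.dumps(event)           # what travels over HTTP
assert json.loads(wire) == event   # anything JSON-serializable survives the trip
```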

My original intention was to integrate Matrix based chat into
Telepathy, a DBus based messaging framework used by e.g. the GNOME
desktop (more specifically Empathy, GNOME's chat client.) After
announcing my plans among the Matrix devs, I quickly learned some things:

1. they are more than open to any development ideas
1. they really wanted to see this working
1. they would have been happy if there were a GLib or Qt based SDK

With my (far from complete) knowledge in GLib I decided to move on
with this last point, hoping that it will help me much when I finally
implement the Telepathy plugin.

## Matrix devs are open minded

What I learned very quickly is that Matrix devs are very open minded
folks from different parts of the world. They are all individuals with
their own ideas, experiences and quirks, yet, when it comes to that,
they steer towards their goals as a community. Thus, getting
additional information from them while reading the spec was super easy.

## The specification is easy to understand

Except when it is not. For these cases, see the previous point.

Jokes aside, anyone who worked with communications protocols or JSON
APIs before can get along with it fast. The endpoints are all
documented, and if something is unclear, they are happy to help
(especially if you patch up the spec afterwards.)

## Copying the SDK for a different language is not (always) what you want

I started my SDK in C, trying to mimic the Python SDK. This was a
double fail: the Python SDK was a volatile WiP, and C and Python are
fundamentally different.

During the upcoming weeks this became clear and I switched to the Vala
language. It is much easier to write GObject based stuff in Vala,
although I had to fall back to C to get some features working. I also
planned and implemented a more object oriented API, which is easier to
use in the GObject world.

_includes/about.html (+0 / -27)

@@ -1,27 +0,0 @@
Gergely Polonkai is a systems engineer at a telco company, and
also a freelance software developer.

He has been learning about different IT subjects since the late
1990s. These include web development, application building,
systems engineering, IT security and many others. He also dug his
nose deeply into free software, dealing with different types of
Linux and its applications,
while also writing and contributing to some open source projects.

On this site he is writing posts about different stuff he faces
during work (oh my, yet another IT solutions blog), hoping they
can help others with their job, or just to get along with their
brand new netbook that shipped with Linux.

“I believe one can only achieve success if they follow their own
instincts and listen to, but not bend under others’ opinions. If
you change your course just because someone says so, you are
following their instincts, not yours.”

_includes/blog-post.html (+0 / -46)

@@ -1,46 +0,0 @@
<article class="{% if page.post_listing %}col-sm-5 col-md-6 {% endif%}post">
{% if page.post_listing %}
<ul class="list-inline">
<li class="col-md-8">
{% endif %}
<header class="post-header">
{% if page.tag %}
{% else %}
{% endif %}
{% if page.post_listing %}
<a href="{{ post.url }}">
{% endif %}
{{ post.title }}
{% if page.post_listing %}
{% endif %}
{% if page.tag %}
{% else %}
{% endif %}
<div class="meta pull-left">
<div class="meta pull-right">
{{ | date: "%b %-d, %Y :: %H:%M"}}
<div class="clearfix"></div>

{% if layout.render_post %}
{% else %}
{% endif %}

{% include tag-link.html %}
{% if layout.post_listing %}
{% endif %}

_includes/head.html (+0 / -16)

@@ -1,16 +0,0 @@
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta name="keywords" content="{{page.keywords}}">
<meta name="description" content="Personal page of Gergely Polonkai">
<title>Gergely Polonkai{% if page.title %}: {{page.title}}{% endif %}</title>

<link rel="icon" type="image/x-icon" href="{% link favicon.ico %}">
<link href=",300,300italic,400italic,600,600italic,700,700italic,800,800italic" rel="stylesheet" type="text/css">
<link rel="alternate" type="application/rss+xml" title="Gergely Polonkai's Blog - RSS Feed" href="{{site.url}}/blog/atom.xml">
<link rel="stylesheet" type="text/css" href="">
<link rel="stylesheet" href="{% link css/style.sass %}">
<link href="" rel="stylesheet"/>

<script type="text/javascript" src="//"></script>
<script src=""></script>
<script src=""></script>

_includes/header.html (+0 / -35)

@@ -1,35 +0,0 @@
<div class="navbar navbar-inverse navbar-fixed-top">
<div class="container-fluid">
<div class="navbar-header">
<button type="button" class="navbar-toggle" data-toggle="collapse" data-target="#gp-navbar">
<span class="icon-bar"></span>
<span class="icon-bar"></span>
<span class="icon-bar"></span>
<a class="navbar-brand" href="{% link index.html %}"><img src="{% link images/profile.svg %}" alt="Gergely Polonkai" style="background-color: white; height: 45px; margin-top: -13px;"></a>
{% if page.url != '/' %}
<a class="navbar-brand" href="{% link index.html %}">Gergely Polonkai</a>
{% endif %}
<div class="collapse navbar-collapse" id="gp-navbar">
<ul class="nav navbar-nav">
<li><a href="{% link blog/index.html %}">Blog</a></li>
<li><a href="{% link resume.html %}">Resume</a></li>
<li><a href="{% link stories/index.html %}">Stories</a></li>
<ul class="nav navbar-nav navbar-right">
<li><a href=""></a></li>
<li><a href="{% link %}">Disclaimer</a></li>
<li class="dropdown">
<a href="#" class="dropdown-toggle" data-toggle="dropdown" role="button" aria-expanded="false"><span class="glyphicon glyphicon-pencil"></span> Contact me <span class="caret"></span></a>
<ul class="dropdown-menu" role="menu">
{% for contact in %}
<li><a href="{{}}" target="_blank"><i class="fa fa-{{ contact.icon }}"></i> <img src="{% link images/contact/index.html %}{{ contact.image }}" alt="" /> {{contact.text}}</a></li>
{% endfor %}
<li><a href="{% link blog/atom.xml %}"><img src="{% link images/contact/index.html %}feed.png" alt="" /> RSS Feed</a></li>

_includes/pagination.html (+0 / -17)

@@ -1,17 +0,0 @@
<ul class="pagination">
<li{% if paginator.previous_page == null %} class="disabled"{% endif %}>
<a href="{{ paginator.previous_page_path | replace: '//', '/'}}" aria-label="Previous page">
<span aria-hidden="true">&laquo;</span>
{% for page in (1..paginator.total_pages) %}
<li{% if == page %} class="active"{% endif %}><a href="{% if page == 1 %}{% link blog/index.html %}{% else %}{{ site.paginate_path | replace: '//', '/' | replace: ':num', page }}{% endif %}">{{page}}</a></li>
{% endfor %}
<li{% if paginator.next_page == null %} class="disabled"{% endif %}>
<a href="{{paginator.next_page_path | replace: '//', '/'}}" aria-label="Next page">
<span aria-hidden="true">&raquo;</span>

_includes/post-list.html (+0 / -9)

@@ -1,9 +0,0 @@
<div class="container-fluid">
{% for post in posts limit: post_limit %}
{% capture counter %}{% cycle 'odd', 'even' %}{% endcapture %}
{% include blog-post.html %}
{% if counter == 'even' %}
<div class="clearfix"></div>
{% endif %}
{% endfor %}

_includes/read_time.html (+0 / -4)

@@ -1,4 +0,0 @@
<span class="reading time" title="Estimated reading time">
{% assign words = content | number_of_words %}
{% if words < 360 %}1 minute{% else %}{{ words | divided_by:180 }} minutes{% endif %} read
</span>
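The estimate in this template is easy to reproduce outside Liquid; a rough Python equivalent of the logic above (180 words per minute, with everything under 360 words rounded to a single minute) could look like this:

```python
def read_time_minutes(text):
    """Estimated reading time, mirroring the Liquid template above."""
    words = len(text.split())
    if words < 360:
        return 1
    return words // 180
```

A 100-word story reads in “1 minute”, a 540-word post in “3 minutes”.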

_includes/tag-link.html (+0 / -11)

@@ -1,11 +0,0 @@
{% capture tagsize %}{{post.tags | size}}{% endcapture %}
{% if tagsize != '0' %}
<p class="article-tags">
{% for tag in post.tags %}
<a href="{% link blog/tag/index.html %}{{ tag }}" class="tag-label">{{tag}}</a>
{% endfor %}
<br class="clearfix">
{% endif %}

_layouts/default.html (+0 / -115)

@@ -1,115 +0,0 @@
<!DOCTYPE html>
{% include head.html %}
{% include header.html %}
<div class="container" id="main-container">


{% if != 'about.html' %}
<div class="well well-sm small">
<div class="pull-left" id="about-well-image">
<a href="{% link about.html %}">
<img src="{% link images/profile.svg %}" alt="">
{% include about.html %}
<div class="clearfix"></div>
{% endif %}
<script type="text/javascript">
$(document).ready(function() {
$('#tagcloud-button').click(function() {

jQuery.extend_if_has = function(desc, source, array) {
for (var i=array.length;i--;) {
if (typeof source[array[i]] != 'undefined') {
desc[array[i]] = source[array[i]];
return desc;

(function($) {
$.fn.tilda = function(eval, options) {
if ($('body').data('tilda')) {
return $('body').data('tilda').terminal;
options = options || {};
eval = eval || function(command, term) {
term.echo("you don't set eval for tilda");
var settings = {
prompt: '> ',
name: 'tilda',
height: 400,
enabled: false,
greetings: 'Welcome to my Terminal. Type `help\' to list the available commands.\n\nPowered by',
keypress: function(e) {
if (e.which == 96) {
return false;
if (options) {
$.extend(settings, options);
this.append('<div class="td"></div>');
var self = this;
self.terminal = this.find('.td').terminal(eval, settings);
var focus = false;
$(document.documentElement).keypress(function(e) {
if (e.which == 96) {
self.terminal.focus(focus = !focus);
scrollTop: self.terminal.attr("scrollHeight")
$('body').data('tilda', this);
return self;

String.prototype.strip = function(char) {
return this.replace(new RegExp("^\\s*"), '')
.replace(new RegExp("\\s*$"), '');

jQuery(document).ready(function($) {
$('#tilda').tilda(function(command, terminal) {
command = command.strip();

switch (command) {
case 'help':
terminal.echo('about - Go to the about page');
terminal.echo(' ');
terminal.echo('More commands will follow soon!');

case 'about':
location = '{% link about.html %}';

terminal.echo(command + ': command not found');

<div id="tilda"></div>

_layouts/page.html (+0 / -15)

@@ -1,15 +0,0 @@
layout: default
<div class="post">

<header class="post-header">
<div class="clearfix"></div>

<article class="post-content">


_layouts/post.html (+0 / -16)

@@ -1,16 +0,0 @@
layout: default
render_post: true
{% assign post = page %}
{% include blog-post.html %}
<ul class="pager">
{% if page.previous %}
<li class="previous"><a href="{{ page.previous.url }}">&larr; {{page.previous.title}}</a></li>
{% endif %}
{% if %}
<li class="next"><a href="{{ }}">{{}} &rarr;</a></li>
{% endif %}

_layouts/posts-by-tag.html (+0 / -15)

@@ -1,15 +0,0 @@
layout: default
post_listing: true
<h3 class="tag">{{ page.tag }}</h3>

<h4>Articles under this tag</h4>

{% if site.tags[page.tag] %}
{% assign posts = site.tags[page.tag] %}
{% include post-list.html %}
{% else %}
No posts with this tag.
{% endif %}

_layouts/story.html (+0 / -8)

@@ -1,8 +0,0 @@
layout: default
{{ page.title }}<br>
<small>{% include read_time.html %}</small>
{{ content }}

(unnamed file, +0 / -43)
@@ -1,43 +0,0 @@
#! /bin/sh
# Find all tags in all posts under _posts, and generate a file for
# each under blog/tag. Also, if a tag page does not contain the tag:
# or layout: keywords, the script will include them in the front
# matter.


layout=posts-by-tag

for tag in `grep -h ^tags: _posts/* | sed -re 's/^tags: +\[//' -e 's/\]$//' -e 's/, /\n/g' | sort | uniq`
do
    echo -n "[$tag] "

    # tag pages live under blog/tag (see _includes/tag-link.html);
    # the exact file name is an assumption
    tag_file="blog/tag/$"
    updated=0

    if [ ! -f $tag_file ]
    then
        echo "creating ($tag_file)"

        cat <<EOF > $tag_file
---
layout: $layout
tag: $tag
---
EOF
        updated=1
    fi

    if ! egrep "^tag: +${tag}$" $tag_file 2>&1 > /dev/null; then
        echo "adding tag"
        sed -i "0,/---/! s/---/tag: $tag\\n---/" $tag_file
        updated=1
    fi

    if ! egrep "^layout: +" $tag_file 2>&1 > /dev/null; then
        echo "adding layout"
        sed -i "0,/---/! s/---/layout: $layout\\n---/" $tag_file
        updated=1
    fi

    if [ $updated = 0 ]; then
        echo ""
    fi
done

_posts/2011-05-12-ethical-hacking-2011.markdown (+0 / -29)

@@ -1,29 +0,0 @@
layout: post
title: "Ethical Hacking 2011"
date: 2011-05-12 20:54:42
tags: [conference]
permalink: /blog/2011/5/12/ethical-hacking-2011
published: true
name: Gergely Polonkai

Today I went to the Ethical Hacking conference with my boss. It was my first
appearance at such conferences, but I hope there will be more. Although we
just started to redesign our IT security infrastructure with a 90% clear goal,
it was nice to hear that everything is vulnerable. I was thinking whether we should
sell all our IT equipment, fire all our colleagues (you know, to prevent
social engineering), and move to the South Americas to herd llamas or sheep,
so the only danger would be some lurking pumas or jaguars. Or I simply leave
my old background image on my desktop, from the well-known game, which says:
Trust is a weakness.

Anyways, the conference was really nice. We heard about the weaknesses of
Android, Oracle, and even FireWire. They showed some demos about everything,
exploited some free and commercial software with no problem at all. We have
seen how much power the virtualisation admin has (although I think it can be
prevented, but I’m not sure yet). However, in the end, we could see that the
Cloud is secure (or at least it can be, in a few months or so), so I’m not
totally pessimistic. See you next time at Hacktivity!

_posts/2011-05-12-gentoo-hardened-desktop-with-gnome-3-round-one.markdown (+0 / -88)

@@ -1,88 +0,0 @@
layout: post
title: "Gentoo hardened desktop with GNOME 3 – Round one"
date: 2011-05-12 20:32:41
tags: [gentoo, gnome3, selinux]
permalink: /blog/2011/5/12/gentoo-hardened-desktop-with-gnome-3-round-one
published: true
name: Gergely Polonkai

After having some hard times with Ubuntu (upgrading from 10.10 to 11.04), I
decided to switch back to my old friend, Gentoo. As I’m currently learning
about Linux hardening, I decided to use the new SELinux profile, which
supports the v2 reference policy.

Installation was pretty easy, using the [Gentoo x86
Handbook]( This profile
automatically turns on the `USE=selinux` flag (so does the old SELinux
profile), but deprecated `FEATURE=loadpolicy` (which is turned on by the
profile, so portage will complain about it until you disable it in

For the kernel, I chose `hardened-sources-2.6.37-r7`. This seems to be recent
enough for my security testing needs. I turned on SELinux, PaX and
grsecurity. So far, I have had no problems with it, but I don’t have X installed
yet, which will screw up things for sure.

After having those hard times with Ubuntu mentioned before, I decided not to
install Grub2 yet, as it renders things unusable (eg. my Windows 7
installation, which I sometimes need at the office). So I installed Grub 0.97
(this is the only version marked as stable, as I remember), touched
`/.autorelabel`, and rebooted.

My first mistake was using a UUID as the root device on the kernel parameter
list (I don’t want to list all the small mistakes like forgetting to include the
correct SATA driver in my kernel and such). Maybe I was lame, but after
including `/dev/sda5` instead of the UUID thing, it worked like…

Well, charm would not be the right word. For example, I forgot to install the
lvm2 package, so nothing was mounted except my root partition. After I
installed it from the install CD, I assumed everything would be all right, but
I was wrong.

udev and LVM are a critical point in a hardened environment. udev itself
doesn’t want to work without the `CONFIG_DEVTMPFS=y` kernel option, so I
also had to change that. It seemed that this could be done without the install CD,
as it compiled the kernel with no problems. However, when it reached the point
when it compresses the kernel with gzip, it stopped with a `Permission denied`
message (although it was running with root privileges).

The most beautiful thing in a hardened environment (with Mandatory Access
Control enabled) is that root is not a real power user any more by default.
You get this kind of message many times. There are many tools to debug
these; I will talk about them later.

So, my gzip needed a fix. After digging a bit on the Internet, I found that
the guilty thing is text relocation, which can be corrected if gzip is
compiled with PIC enabled. Thus, I turned on `USE=pic` flag globally, and
tried to remerge gzip. Of course it failed, as it had to use gzip to unpack
the gzip sources. So it did when I tried to install the PaX tools and gradm to
turn these checks off. The install CD came to the rescue again, with which I
successfully recompiled gzip, and with this new gzip, I compressed my new
kernel, with which udev started successfully. So far, so good, let’s try to boot again.

Damn, LVM is still not working. So I decided to finally consult the Gentoo
hardened guide. It says that the LVM startup scripts under `/lib/rcscripts/…`
must be modified, so LVM will put its lock files under `/etc/lvm/lock` instead
of `/dev/.lvm`. After this step and a reboot, LVM worked fine (finally).

The next thing was the file system labelling. SELinux should automatically
relabel the entire file system at boot time whenever it finds the
`/.autorelabel` file. Well, in my case it didn’t happen. After checking the
[Gentoo Hardening]( docs, I realised that the `rlpkg` program does exactly the same
(as far as I know, it is designed specifically for Gentoo). So I ran `rlpkg`,
and was kind of shocked. It says it will relabel ext2, ext3, xfs and JFS
partitions. Oh great, no ext4 support? Well, consulting the forums and adding
some extra lines to `/etc/portage/package.keywords` solved the problem (`rlpkg`
and some dependencies had to have the `~x86` keyword set). Thus, `rlpkg`
relabelled my file systems (I checked some directories with `ls -lZ`, it seemed
good for me).

Now it seems that everything is working fine, except the tons of audit
messages. Tomorrow I will check them with `audit2why` or `audit2allow` to see if
it is related with my SELinux lameness, or with a bug in the policy included
with Gentoo.
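The triage the post defers to `audit2why`/`audit2allow` can be sketched. The AVC denial below is a fabricated sample in the usual `audit.log` format (the `comm`, contexts and timestamps are made up), and the real `ausearch` pipeline is only shown as a comment since it needs a live audit daemon:

```shell
#!/bin/sh
# Fabricated sample AVC denial, in the format found in /var/log/audit/audit.log
avc='type=AVC msg=audit(1305230000.123:42): avc:  denied  { read } for  pid=1234 comm="gzip" scontext=system_u:system_r:portage_t:s0 tcontext=system_u:object_r:usr_t:s0 tclass=file'

# Extract the parts audit2why/audit2allow reason about
perm=$(printf '%s\n' "$avc" | sed -n 's/.*denied  { \([a-z_]*\) }.*/\1/p')
src=$(printf '%s\n' "$avc" | sed -n 's/.*scontext=[^:]*:[^:]*:\([^:]*\):.*/\1/p')
tgt=$(printf '%s\n' "$avc" | sed -n 's/.*tcontext=[^:]*:[^:]*:\([^:]*\):.*/\1/p')
echo "$src may not $perm $tgt files"

# On a real system the usual loop is (not executed here):
#   ausearch -m avc -ts recent | audit2why
#   ausearch -m avc -ts recent | audit2allow -M localfix && semodule -i localfix.pp
```

Whether the answer is "fix the labels" or "patch the policy" is exactly what `audit2why` is for.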

+ 0
- 35
_posts/2011-05-13-zabbix-performance-tip.markdown View File

@@ -1,35 +0,0 @@
layout: post
title: "Zabbix performance tip"
date: 2011-05-13 19:03:31
tags: [zabbix, monitoring]
permalink: /blog/2011/5/13/zabbix-performance-tip
published: true
name: Gergely Polonkai

Recently I have switched from [MRTG]( + [Cacti]( + [Nagios]( + [Gnokii]( to [Zabbix](, and I
must say I’m more than satisfied with it. It can do anything the former tools
did, and much more. First of all, it can do the same monitoring as Nagios did,
but it does it much better. It can check several parameters within one
request, so network traffic is kept down. Its web front-end can generate
all kinds of graphs from the collected data, which replaced Cacti. It can
also do SNMP queries (v1–v3), so querying my switches’ port states and traffic
became easy, taking MRTG out of the picture (I know Cacti can do that too; there
were historical reasons we had both tools installed). And the best part: it can
send SMS messages via a GSM modem natively, while Nagios had to use Gnokii.
The trade-off is that I had to install the Zabbix agent on all my monitored
machines, but I think it’s worth the price. I even had to install NRPE to monitor
some parameters, which can be a pain on Windows hosts, while Zabbix natively
supports Windows, Linux and Mac OS X.

So I only had to create a MySQL database (which I already had for NOD32
central management), and install Zabbix server. Everything went fine, until I
reached about 1300 monitored parameters. MySQL seemed to be a bit slow on disk
writes, so my Zabbix “queue” filled up in no time. After reading some forums,
I decided to switch to PostgreSQL instead. Now it works like a charm, even with
the default Debian settings. However, I will have to add several more
parameters, and my boss wants as many graphs as you can imagine, so I’m more
than sure that I will have to fine tune my database later.

+ 0
- 29
_posts/2011-05-18-gentoo-hardened-desktop-with-gnome-3-round-two.markdown View File

@@ -1,29 +0,0 @@
layout: post
title: "Gentoo hardened desktop with GNOME 3 – Round two"
date: 2011-05-18 10:28:14
tags: [gentoo, gnome3, selinux]
permalink: /blog/2011/5/18/gentoo-hardened-desktop-with-gnome-3-round-two
published: true
name: Gergely Polonkai

After several hours of `package.keywords`/`package.use` editing and package
compiling, I managed to install GNOME 3 on my notebook. Well, I mean, the
GNOME 3 packages. Unfortunately the fglrx driver didn’t seem to recognise my
ATI Mobility M56P card, and the open source driver didn’t want to give me GLX
support. When I finally found some clues on what I should do, I had to use my
notebook for work, so I installed Fedora 14 on it. Then I realised that GNOME
3 is already included in Rawhide (Fedora 15), so I quickly downloaded and
installed that instead. Now I have to keep this machine in a working state for
a few days, so I will learn SELinux stuff in its native environment.

When I installed Fedora 14, the first AVC message popped up after about ten
minutes. That was a good thing, as I wanted to see `setroubleshoot` in action.
However, in Fedora 15, the AVC bubbles didn’t show up even after a day. I
raised my left eyebrow and said that’s impossible, SELinux must be disabled.
And it’s not! It’s even in enforcing mode! And it works just fine. I like it,
and I hope I will be able to get the same results with Gentoo if I can get
back to testing…

+ 0
- 41
_posts/2011-05-27-citrix-xenserver-vs-debian-5-0-upgrade-to-6-0.markdown View File

@@ -1,41 +0,0 @@
layout: post
title: "Citrix XenServer 5.5 vs. Debian 5.0 upgrade to 6.0"
date: 2011-05-27 17:33:41
tags: [citrix-xenserver, debian]
permalink: /blog/2011/5/27/citrix-xenserver-vs-debian-5-0-upgrade-to-6-0
published: true
name: Gergely Polonkai

A few weeks ago I upgraded two of our Debian based application servers from
5.0 to 6.0. Everything went fine, as the upgraded packages worked well with
the 4.2 JBoss instances. The new kernel needed a reboot, but as the
network had to be rebuilt, I postponed the reboot until the network changes.
With the network, everything went fine again: we successfully migrated our
mail servers behind a firewall. The Xen server (5.5.0; the upgrade to 5.6
still has to wait a week or so) also rebooted fine with some storage disks
added. But the application servers remained silent…

After checking the console, I realised that they didn’t have an active console.
And when I tried to start them manually, XenServer refused with a message
regarding pygrub.

To understand the problem, I had to understand how XenServer boots Debian. It
reads the grub.conf in the first partition’s root or `/boot` directory, and
starts the first option without asking (correct me if I’m mistaken
somewhere). However, this pygrub thing cannot parse the new grub2 config.
This is kind of frustrating.
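The first-entry behaviour is easy to see by parsing a legacy config by hand. The menu below is a fabricated example of the grub.conf format pygrub can read (grub2's grub.cfg uses a completely different syntax, which is exactly the problem):

```shell
#!/bin/sh
# Fabricated legacy grub.conf -- the format pygrub understands
cat > /tmp/grub.conf <<'EOF'
default 0
timeout 5

title Debian GNU/Linux 6.0
root (hd0,0)
kernel /vmlinuz-2.6.32-5-xen-amd64 root=/dev/xvda1 ro
initrd /initrd.img-2.6.32-5-xen-amd64
EOF

# pygrub effectively takes the "default" entry and boots its kernel line
kernel=$(awk '$1 == "kernel" {print $2; exit}' /tmp/grub.conf)
echo "would boot $kernel"
```

A grub2 `menuentry { ... }` block would fall straight through a parser like this, which is why the guest no longer started.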

As a first step, I quickly installed a new Debian 5.0 system from my
template. Then I attached the disks of the faulty virtual machine and mounted
all its partitions. This way I could reach my faulty 6.0 system in a chroot
shell, from which I could install the `grub-legacy` package instead of grub,
install the necessary kernel and XenServer tools (which were missing from both
machines somehow), then halt the rescue system and start up the original one.

Next week I will do an upgrade on the XenServer to 5.6.1. I hope no such
problems will occur.

+ 0
- 25
_posts/2011-05-27-oracle-database-incompatible-with-oracle-linux.markdown View File

@@ -1,25 +0,0 @@
layout: post
title: "Oracle Database “incompatible” with Oracle Linux?"
date: 2011-05-27 17:53:31
tags: [linux, oracle]
permalink: /blog/2011/5/27/oracle-database-incompatible-with-oracle-linux
published: true
name: Gergely Polonkai

Today I gave a shot to install [Oracle
Linux]( I thought I could easily install
an Oracle database on it. Well, I was naive.

As only the 5.2 version is supported by XenServer 5.5, I downloaded that
version of Oracle Linux. Installing it was surprisingly fast and easy, it
asked almost nothing, and booted without any problems.

After this came the database, 10.2, which threw an error message in my face
saying that this is an unsupported version of Linux. Bah.

Is it only me, or is it really strange that Oracle doesn’t support their own
Linux distribution?

+ 0
- 22
_posts/2011-06-10-proxy-only-non-existing-files-with-mod-proxy-and-mod-rewrite.markdown View File

@@ -1,22 +0,0 @@
layout: post
title: "Proxy only non-existing files with mod_proxy and mod_rewrite"
date: 2011-06-10 14:20:43
tags: [apache]
permalink: /blog/2011/6/10/proxy-only-non-existing-files-with-mod-proxy-and-mod-rewrite
published: true
name: Gergely Polonkai

Today I got an interesting task. I had to upload some PDF documents to a site.
The domain is ours, but we don’t have access to the application server that is
hosting the page yet. Until we get it in our hands, I did a trick.

I enabled `mod_rewrite`, `mod_proxy` and `mod_proxy_http`, then added the following
lines to my apache config:

{% gist 47680bfa44eb29708f20 redirect-non-existing.conf %}

I’m not totally sure it’s actually secure, but it works for now.
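The gist isn't rendered in this dump; a configuration with the described effect might look like the sketch below (the backend host is made up, not the author's actual setup):

```apache
# Serve the file from disk if it exists; otherwise proxy to the app server
RewriteEngine On
RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} !-f
RewriteRule ^/(.*)$ http://appserver.example.com/$1 [P,L]
ProxyPassReverse / http://appserver.example.com/
```

The `!-f` condition is what limits the proxying to non-existing files, and `[P]` hands the request to `mod_proxy`.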

+ 0
- 30
_posts/2011-09-18-inverse-of-sort.markdown View File

@@ -1,30 +0,0 @@
layout: post
title: "Inverse of `sort`"
date: 2011-09-18 14:57:31
tags: [linux, command-line]
permalink: /blog/2011/9/18/inverse-of-sort
published: true
name: Gergely Polonkai

I’m using \*NIX systems for about 14 years now, but they can still show me new
things. Today I had to generate a bunch of random names. I created a small
Perl script which generates permutations of some usual Hungarian first and
last names, occasionally prefixing them with a ‘Dr.’ title or using double first
names. For some reason I forgot to include a uniqueness check in the script.
When I ran it on the command line, I realized the mistake, so I appended
`| sort | uniq` to the command line. So I had around 200 unique names, but in
alphabetical order, which was awful for my final goal. Thus, I tried shell
commands like `rand` to create a random order, and when many of my tries failed,
the idea popped into my mind (not being a native English speaker): “I don’t have
to create «random order», but «shuffle the list».” So I started typing `shu`,
pressed Tab in the Bash shell, and voilà! `shuf` is the winner; it does
exactly what I need:

shuf - generate random permutations

Thank you, Linux Core Utils! :)
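The fix is one extra stage in the pipeline; a minimal demonstration (the names are made-up placeholders, not the script's output):

```shell
#!/bin/sh
printf 'Nagy Péter\nDr. Kovács János\nSzabó Anna\nNagy Péter\n' > /tmp/names.txt

# sort | uniq deduplicates but leaves alphabetical order;
# piping through shuf afterwards restores a random order
sort /tmp/names.txt | uniq | shuf > /tmp/shuffled.txt

wc -l < /tmp/shuffled.txt
```

The output contains the same three unique names, just in a random order each run.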

+ 0
- 16
_posts/2011-12-11-why-you-should-always-test-your-software-with-production-data.markdown View File

@@ -1,16 +0,0 @@
layout: post
title: "Why you should always test your software with production data"
date: 2011-12-11 12:14:51
tags: [development, testing, ranting]
permalink: /blog/2011/12/11/why-you-should-always-test-your-software-with-production-data
published: true
name: Gergely Polonkai

I’m writing a software for my company in PHP, using the Symfony 2 framework.
I’ve finished all the work and created some sample data, which loaded perfectly.
Then I put the whole thing into production and tried to load the production data
into it. Guess what… it didn’t.

+ 0
- 29
_posts/2012-03-20-php-5-4-released.markdown View File

@@ -1,29 +0,0 @@
layout: post
title: "PHP 5.4 released"
date: 2012-03-20 13:31:12
tags: [php]
permalink: /blog/2012/3/20/php-5-4-released
published: true
name: Gergely Polonkai

After a long wait, PHP announced the 5.4 release on 1 March (today they also
announced that they are finally migrating to Git, which is sweet from my
point of view, but it doesn’t really matter).

About a year ago we became very aggressive towards the developer who created our
internal e-learning system. Their database was very insecure, and they didn’t
really follow industry standards in many ways. Thus, we forced them to move
from Windows + Apache 2.0 + PHP 5.2 + MySQL 4.0 to Debian Linux 6.0 + Apache
2.2 + PHP 5.3 + MySQL 5.1. It was fun (well, from our point of view), as their
coders… well… they are not so good. The code that ran “smoothly” on the
old system failed at many points on the new one. So they code and code, and
write more code. And they still haven’t finished. And now 5.4 is here. Okay, I
know it will take some time to get into the Debian repositories, but it’s
here. And they removed `register_globals`, which will break that funny code again
at so many points that they will soon have to rewrite the whole thing to make it
work. And I just sit here in my so-much-comfortable chair, and laugh. Am I

+ 0
- 34
_posts/2012-03-27-fast-world-fast-updates.markdown View File

@@ -1,34 +0,0 @@
layout: post
title: "Fast world, fast updates"
date: 2012-03-27 06:18:43
tags: [linux]
permalink: /blog/2012/3/27/fast-world-fast-updates
published: true
name: Gergely Polonkai

We live in a fast world, that’s for sure. When I first heard about Ubuntu
Linux and their goals, I was happy: they gave a Debian to everyone, but in
different clothes. It had fresh software in it, and they even gave support of
a kind. It was easy to install and use, even if one had no Linux experience
before. So people liked it. I’ve even installed it on some of my servers
because of the new package versions that came more often. Thus I got an up to
date system. However, it had a price. After a while, security updates came
more and more often, and when I had a new critical update every two or three
days, I decided to move back to Debian. Fortunately I did this at the time
of a new release, so I didn’t really lose any features.

After a few years passed, even Debian is heading this very same way. But as I
see, the cause is not the same. It seems that upstream software is hitting
these bugs, and even the Debian guys don’t have the time to check for them. At
the time of a GNOME version bump (yes, GNOME 3 is a really big one for the
UN\*X-like OSes), when hundreds of packages need to be checked, security bugs
show up more often. On the other hand, however, Debian is releasing a new
security update every day (I had one on each of the last three days). This, of
course, is good from one point of view as we get a system that is more secure,
but most administrators don’t have maintenance windows this often. I can think
of some alternatives like Fedora, but do I really have to change? Dear fellow
developers, please code more carefully instead!

+ 0
- 28
_posts/2012-06-14-wordpress-madness.markdown View File

@@ -1,28 +0,0 @@
layout: post
title: "Wordpress madness"
date: 2012-06-14 06:40:12
tags: [wordpress, ranting]
permalink: /blog/2012/6/14/wordpress-madness
published: true
name: Gergely Polonkai

I’m a bit fed up that I had to install [MySQL]( on my
server to have [Wordpress]( working, so I’ve Googled a
bit to find a solution for my pain. I found
[this]( I don’t know when
this post was written, but I think it’s a bit out of date. I mean, come on, PDO
has been part of PHP for ages now, and they say adding a DBAL to the dependencies
would be a project as large as (or larger than) WP itself. Well,
yes, but PHP is already a dependency, isn’t it? Remove it guys, it’s too

Okay, to be serious… Having a heavily MySQL dependent codebase is a bad
thing in my opinion, and changing it is no easy task. But once it is done, it
would be child’s play to keep it up to date, and to port WP to other
database backends. And it would be more than enough to call it 4.0, and
raising version numbers fast is a must nowadays (right, Firefox and Linux
Kernel guys?)

+ 0
- 28
_posts/2012-06-18-ssh-login-failed-on-red-hat-enterprise-linux-6-2.markdown View File

@@ -1,28 +0,0 @@
layout: post
title: "SSH login FAILed on Red Hat Enterprise Linux 6.2"
date: 2012-06-18 18:28:45
tags: [linux, selinux, ssh, red-hat]
permalink: /blog/2012/6/18/ssh-login-failed-on-red-hat-enterprise-linux-6-2
published: true
name: Gergely Polonkai

Now this was a mistake I should not have made…

About a month ago I have moved my AWS EC2 machine from Amazon Linux to RHEL
6.2. This was good. I have moved all my files and stuff, recreated my own
user, everything was just fine. Then I copied my
[gitosis]( account (user `git` and its home
directory). Then I tried to log in. It failed. I was blaming OpenSSH for a week
or so; I changed the config file in several ways and tried to change the
permissions on `~git/.ssh/*`, but still nothing. Permission was denied, and I was
unable to push any of my development changes. After a long time of trying, I
coincidentally `tail -f`-ed `/var/log/audit/audit.log` (I wanted to open `auth.log`
instead), and that was my first good lead. It told me that `sshd` was unable to
read `~git/.ssh/authorized_keys`, which gave me the idea to run `restorecon` on
`/home/git`. It solved the problem.

All hail SELinux and RBAC!

+ 0
- 35
_posts/2012-06-22-upgrades-requiring-a-reboot-on-linux-at-last.markdown View File

@@ -1,35 +0,0 @@
layout: post
title: "Upgrades requiring a reboot on Linux? At last!"
date: 2012-06-22 20:04:51
tags: [linux]
permalink: /blog/2012/6/22/upgrades-requiring-a-reboot-on-linux-at-last
published: true
name: Gergely Polonkai

I recently received an article on Google+ about Fedora’s new idea: package
upgrades that require a reboot. The article said that Linux guys have lost
their primary boast: “Haha! I don’t have to reboot my system to install system
upgrades!” My answer was always this: “Well, actually you should…”

I think this can be a great idea if distros implement it well. PackageKit was
a good first step on this road. That software could easily solve such an
issue. However, it is sooo easy to do it wrong. The kernel, of course, can not
be upgraded online (or could it be? I have some theories on this subject,
wonder if it can be implemented…), but other packages are much different.
From the users’ point of view, the best would be if packages were
upgraded in the background seamlessly. E.g. PackageKit should check whether the
given executable is running. If not, it should upgrade it, while notifying the
user like “Hey dude, don’t start Anjuta now, I’m upgrading it!”, or simply
refusing to start it. Libraries are a bit different, as PackageKit should check
if any running executables are using the library. Meanwhile, PK should also
keep a notification somewhere telling the users that some packages could be
upgraded, but without stopping this-and-that, it can not be done.

I know these things are easier said than done. But I think (a) users should
tell such ideas to the developers and (b) developers (mostly large companies,
like Microsoft or Apple) should listen to them, and at least think of these
ideas. Some users are not as stupid as they think…
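The "is it running?" gate imagined above is trivial to sketch in shell (the package name is the post's own hypothetical example; a real PackageKit backend would track processes itself rather than shell out):

```shell
#!/bin/sh
pkg=anjuta   # hypothetical package to upgrade

# Defer the upgrade while any process with that exact name is alive
if pgrep -x "$pkg" > /dev/null 2>&1; then
    echo "deferring upgrade: $pkg is running"
else
    echo "upgrading $pkg in the background"
fi
```

The hard part the post alludes to is the library case, where every process mapping the old `.so` would have to be checked, not just one process name.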

+ 0
- 80
_posts/2012-09-05-some-thoughts-about-that-dead-linux-desktop.markdown View File

@@ -1,80 +0,0 @@
layout: post
title: "Some thoughts about that dead Linux Desktop"
date: 2012-09-05 09:01:31
tags: [linux]
permalink: /blog/2012/9/5/some-thoughts-about-that-dead-linux-desktop
published: true
name: Gergely Polonkai

There were some arguments in the near past on [What Killed the Linux
Desktop]( After reading many
replies, like [Linus
I have my own thoughts, too.

I know my place in the world, especially in the online community. I’ve been a
Linux user for about 15 years and a Linux administrator for 10 years now, beginning
with WindowMaker and something that I remember as GNOME without a version
number. I have committed some minor code chunks and translations in some minor
projects, so I’m not really into it from the “write” side (well, until now,
since I have began to write this blog, and much more, but don’t give a penny
for my words until you see it).

I’ve been using Linux since 2.2 and GNOME since 1.whatever. It’s nice that a program
compiled years ago still runs on today’s Linux kernel, especially if you see
old DOS/Windows software failing to start on a new Windows 7 machine. I
understand Linus’ point that breaking external APIs is bad, and I think it can
work well on the kernel’s level. But the desktop level is much different. As
the Linux Desktop has such competitors (like OS/X and Windows’ Aero and Metro),
they have to give something new to the users almost every year to keep up with
them. Eye candies are a must (yes, of course my techy fellows, they are
worthless, but users *need* it), and they can not be created without extending
APIs. And the old API… well, it fades away fast. I don’t really understand,
however, why deprecated functions have to totally disappear, like they did in
Gtk3. They could be replaced with no-ops (e.g. they would do nothing). This
way my old Gtk2 program could compile with Gtk3 nicely. Also, there could be a
small software that goes through your source code and warn you about such
deprecated (and no-doer but still working) things. Porting applications between
Gtk (and thus, GNOME) versions became a real pain, which makes less enthusiast
programmers stop developing for Linux. Since I’m a GNOME guy for years, I can
tell nothing about Qt and KDE, but for the GNOME guys, this is a bad thing. As
of alternatives, there is Java. No, wait… it turned out recently that [it has
several security
Also it’s not that multiplatform as they say (I can’t find the article on
that at the moment, but I have proof). Also, the JVMs out there eat up so much
resources, which makes it a bit hard and expensive to use.
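The "small software that warns you about deprecated things" wished for above can be approximated with grep and a symbol list (the source file and the list are fabricated for the sketch; `gtk_hbox_new` really was removed in Gtk3):

```shell
#!/bin/sh
# Fabricated Gtk2 source and a tiny list of symbols that are gone in Gtk3
cat > /tmp/old_gtk.c <<'EOF'
GtkWidget *box = gtk_hbox_new(FALSE, 2);
gtk_widget_show_all(window);
EOF
printf 'gtk_hbox_new\ngtk_vbox_new\n' > /tmp/gone_in_gtk3.txt

# Flag every line that still uses a removed symbol
grep -n -F -f /tmp/gone_in_gtk3.txt /tmp/old_gtk.c
```

Here only the `gtk_hbox_new` line is flagged; a real porting tool would parse the code rather than pattern-match, but the idea is the same.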

Also, I see another problem: those blasted package managers. RPM, DPKG,
Portage, whatever. What the hell? Why are there so many? Why do developers
reinvent the wheel? Is the nave too small or are there too few spokes? Come on…
we live in an open source world! Contribute to the one and only package manager
(which one that is, I don’t actually care)! I’m sure the two (three, many)
bunches of developers could make a deal. Thus, it could become better, and
“outsider” companies would be happier to distribute their software for Linux.

And now we get to the big companies. I don’t really understand them.
nVidia and ATI made their own closed source drivers for Linux. Some other
hardware vendors also write Linux drivers, and as the kernel API doesn’t really
change, they will work for a long time. But what about desktop
application vendors? Well, they try to stick to a desktop environment or two,
and if they change too frequently, they stop developing for Linux, like Skype
did (OK, maybe Skype has other reasons, but you see my point). But why? The
main part for Linux programs is the Linux kernel and the basic userland like
libc/stdlib++. If you write graphical software, it will have to use X-Windows.
Yes, it’s much different in many ways, mostly because they have a… well… pretty
ugly design by default. But still, it’s the same on every Linux distribution,
as it became somewhat an industry standard, as it was already on the market
back in the old UN\*X days. The protocol itself changed just like the Linux
kernel: almost no change at all, just some new features.

So what kills the Linux desktop, in my opinion, is these constant wars inside
and the lack of support from the outside. Open Source is good, but until these
problems (mostly the first) are resolved, the Linux Desktop can do nothing on
the market. It’s a downward spiral that is hard to escape.

+ 0
- 76
_posts/2012-09-07-how-to-start-becoming-a-web-developer.markdown View File

@@ -1,76 +0,0 @@
layout: post
title: "How to start becoming a web developer"
date: 2012-09-07 18:12:12
tags: [development, technology]
permalink: /blog/2012/9/7/how-to-start-becoming-a-web-developer
published: true
name: Gergely Polonkai

A friend of mine asked me today how to become a web developer. It took me a
while, but I made up a checklist. It’s short, but it’s enough for the first steps.

#### First of all, learn English

Well, if you read this, maybe this was a bad first point…

#### Choose a language and stick to it!

For the UN\*X/Linux line, there is PHP. It’s free, easy to learn, and has many
free tools and much documentation available. It can be used in a functional or
an object-oriented way.

C# is another good way to start, but on the Windows line. It’s fully
object-oriented, and the web is full of tutorials, how-tos and other resources.

#### Learn the basics of the system you are working on

To become a good developer, learn at least the basics of the system you are
working on. Basic commands can always come in handy. Debugging (yes, you will
write tons of bugs for sure) can become much easier if you know the huge set of
tools provided by your OS. You should also develop in the chosen
environment. Chose PHP? Get a Linux desktop! ASP.NET? Get Windows.
Everything will be much easier!

#### Learn the basics of the web server you are using

PHP can run on [Apache]( (as a module), or any
CGI-capable webserver, like [lighttpd]( or
[nginx]( (well, it can also run on IIS, but trust me: you
don’t want that). ASP.NET is designed for IIS, and although some scripts can
be run under a mono-capable server, it should still stay there.

Whichever you choose, learn the basics! How to start and stop the service,
basic configuration methods, modules/extensions, and so on. It’s more than sure
that you will face some issues while developing, so it can never hurt.

#### Keep your versions under control

Version control is critical nowadays. It gives you a basic backup solution,
can come in handy with debugging, and if you ever want to work in a team, you
will badly need it.

Subversion is a bit out of date now, and it’s kind of hard to set up.

Git is not easy. You will have to learn a lot of stuff, but basically it’s just
another version control system. Just choose whether you want to stick to
merge-then-commit or rebase-then-commit, get a client, and get going.
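The "get a client and get going" part really is just a handful of commands; a first session might look like this (run in a throwaway directory; the name, email and file are placeholders):

```shell
#!/bin/sh
cd "$(mktemp -d)"
git init -q .
git config user.email "you@example.com"
git config user.name "Your Name"

echo '<?php echo "hello"; ?>' > index.php
git add index.php
git commit -q -m "First commit"

git log --oneline
```

From there the only real decision is the merge-vs-rebase workflow mentioned above; the day-to-day commands stay the same.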

Microsoft’s Team Foundation is another good way if you are working in a team.
It provides several other features besides version controlling, and is well
integrated into Visual Studio, which is highly recommended for Windows-based
development.

#### Choose an environment to work in