Compare commits

.gitignore vendored

@ -1,3 +1,2 @@
__pycache__/
/output/
/gergelypolonkaieu_site.egg-info/
/Gemfile.lock
_site/

@ -0,0 +1,2 @@
(setq hyde/git/remote "origin"
hyde/git/remote-branch "master")

@ -0,0 +1,11 @@
---
layout: page
title: Not Found
permalink: /404.html
---
The page you are looking for is not here. Maybe it was, but I have removed it. Most likely that was intentional. If you think I made a mistake, please tell me.
{% if page.url contains '/akarmi' %}
If you are looking for the pictures that used to be here, you should definitely contact me. For reasons.
{% endif %}

@ -0,0 +1 @@
gergely.polonkai.eu

@ -0,0 +1,3 @@
source 'https://rubygems.org'
gem 'github-pages'

@ -1,74 +0,0 @@
PY?=python3
PELICAN?=pelican
PELICANOPTS=

BASEDIR=$(CURDIR)
INPUTDIR=$(BASEDIR)/content
OUTPUTDIR=$(BASEDIR)/output
CONFFILE=$(BASEDIR)/pelicanconf.py
PUBLISHCONF=$(BASEDIR)/publishconf.py

DEBUG ?= 0
ifeq ($(DEBUG), 1)
	PELICANOPTS += -D
endif

RELATIVE ?= 0
ifeq ($(RELATIVE), 1)
	PELICANOPTS += --relative-urls
endif

help:
	@echo 'Makefile for a pelican Web site'
	@echo ''
	@echo 'Usage:'
	@echo '   make html                           (re)generate the web site'
	@echo '   make clean                          remove the generated files'
	@echo '   make regenerate                     regenerate files upon modification'
	@echo '   make publish                        generate using production settings'
	@echo '   make serve [PORT=8000]              serve site at http://localhost:8000'
	@echo '   make serve-global [SERVER=0.0.0.0]  serve (as root) to $(SERVER):80'
	@echo '   make devserver [PORT=8000]          serve and regenerate together'
	@echo '   make ssh_upload                     upload the web site via SSH'
	@echo '   make rsync_upload                   upload the web site via rsync+ssh'
	@echo ''
	@echo 'Set the DEBUG variable to 1 to enable debugging, e.g. make DEBUG=1 html'
	@echo 'Set the RELATIVE variable to 1 to enable relative urls'
	@echo ''

html:
	$(PELICAN) $(INPUTDIR) -o $(OUTPUTDIR) -s $(CONFFILE) $(PELICANOPTS)

clean:
	[ ! -d $(OUTPUTDIR) ] || rm -rf $(OUTPUTDIR)

regenerate:
	$(PELICAN) -r $(INPUTDIR) -o $(OUTPUTDIR) -s $(CONFFILE) $(PELICANOPTS)

serve:
ifdef PORT
	$(PELICAN) -l $(INPUTDIR) -o $(OUTPUTDIR) -s $(CONFFILE) $(PELICANOPTS) -p $(PORT)
else
	$(PELICAN) -l $(INPUTDIR) -o $(OUTPUTDIR) -s $(CONFFILE) $(PELICANOPTS)
endif

serve-global:
ifdef SERVER
	$(PELICAN) -l $(INPUTDIR) -o $(OUTPUTDIR) -s $(CONFFILE) $(PELICANOPTS) -p $(PORT) -b $(SERVER)
else
	$(PELICAN) -l $(INPUTDIR) -o $(OUTPUTDIR) -s $(CONFFILE) $(PELICANOPTS) -p $(PORT) -b 0.0.0.0
endif

devserver:
ifdef PORT
	$(PELICAN) -lr $(INPUTDIR) -o $(OUTPUTDIR) -s $(CONFFILE) $(PELICANOPTS) -p $(PORT)
else
	$(PELICAN) -lr $(INPUTDIR) -o $(OUTPUTDIR) -s $(CONFFILE) $(PELICANOPTS)
endif

publish:
	$(PELICAN) $(INPUTDIR) -o $(OUTPUTDIR) -s $(PUBLISHCONF) $(PELICANOPTS)

.PHONY: html help clean regenerate serve serve-global devserver publish

@ -1,18 +1,2 @@
# Website of Gergely Polonkai
## Publishing to the web
```
poetry run pelican content
rsync --archive --verbose --human-readable --delete output/ /data/gergely.polonkai.eu/html
restorecon -vr /data/gergely.polonkai.eu/html
```
## Publishing to IPFS
```
poetry run pelican -s ipfspublish.py -o ipfs-version content
cd ipfs-version
ipfs add -r .
ipfs name publish --key=gergely.polonkai.eu /ipfs/XXX
```
gergelypolonkai.github.io
=========================

@ -0,0 +1,19 @@
# Site settings
title: Gergely Polonkai
email: gergely@polonkai.eu
description: "developer, systems engineer and administrator"
baseurl: ""
url: "http://gergely.polonkai.eu"
timezone: Europe/Budapest
name: Gergely Polonkai
paginate: 10
paginate_path: "/blog/page/:num"
exclude: ['README.md', 'Gemfile', 'Gemfile.lock', 'CNAME', ".hyde.el"]
include: ['.well-known']
plugins:
- jekyll-gist
- jekyll-paginate
# Build settings
markdown: kramdown
permalink: pretty

@ -0,0 +1,42 @@
- text: E-mail
  link: mailto:gergely@polonkai.eu
  image: email.png
- text: Stack Exchange
  link: http://stackexchange.com/users/1369500/gergelypolonkai
  image: stackexchange.png
- text: LinkedIn
  link: http://www.linkedin.com/in/gergelypolonkai
  image: linkedin.png
- text: Skype
  link: skype:gergely.polonkai
  image: skype.png
- text: Facebook
  link: http://facebook.com/Polesz
  image: facebook.png
- text: Google+
  link: https://plus.google.com/+GergelyPolonkai/about
  image: google_plus.png
- text: Twitter
  link: http://twitter.com/GergelyPolonkai
  image: twitter.png
- text: Tumblr
  link: http://gergelypolonkai.tumblr.com
  image: tumblr.png
- text: deviantArt
  link: http://gergelypolonkai.deviantart.com
  image: deviantart.png
- text: Hashnode
  link: https://hashnode.com/@gergelypolonkai
  image: hashnode.png
- text: Keybase
  link: https://keybase.io/gergelypolonkai
  image: keybase.png
- text: Liberapay
  link: https://liberapay.com/gergelypolonkai
  image: liberapay.png
- text: Mastodon
  link: https://social.polonkai.eu/@gergely
  image: mastodon.png
- text: Pay me a coffee
  link: https://paypal.me/GergelyPolonkai/250
  image: paypal.png

File diff suppressed because it is too large

@ -0,0 +1,15 @@
---
layout: post
title: "GtkActionable in action"
author:
name: "Gergely Polonkai"
email: "gergely@polonkai.eu"
---
I have seen several people (including myself) struggling with
disabling/enabling menu items, toolbar buttons and similar UI
elements based on different conditions. It gets even worse if there
are multiple representations of the same action in the same
application, e.g. a menu item and a toolbar button exist for the same
action. But with GTK+ 3.4, we have GtkActionable, which is made exactly
for this kind of situation.
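
To make the idea concrete, here is a minimal, self-contained GJS sketch of my own (not code from the post): one `GSimpleAction` drives two `GtkActionable` widgets, so enabling or disabling the action updates every representation at once.
```javascript
#!/usr/bin/env gjs
// Minimal sketch (my illustration, not from the post): one GAction,
// two GtkActionable widgets sharing it.
const Gtk = imports.gi.Gtk;
const Gio = imports.gi.Gio;

Gtk.init(null);

let win = new Gtk.Window({ title: 'GtkActionable demo' });
win.connect('destroy', Gtk.main_quit);

// The single source of truth for the "save" operation
let save = new Gio.SimpleAction({ name: 'save', enabled: false });
save.connect('activate', function() { print('saving…'); });

let group = new Gio.SimpleActionGroup();
group.add_action(save);
// Make the group visible to every widget inside the window under the "app" prefix
win.insert_action_group('app', group);

// Two representations of the same action
let button = new Gtk.Button({ label: 'Save (toolbar-style button)' });
button.set_action_name('app.save');

let menuItem = new Gtk.MenuItem({ label: 'Save (menu item)' });
menuItem.set_action_name('app.save');

let box = new Gtk.Box({ orientation: Gtk.Orientation.VERTICAL });
box.add(button);
box.add(menuItem);
win.add(box);
win.show_all();

// Flipping this one property (in)sensitizes both widgets at once
save.set_enabled(true);

Gtk.main();
```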

@ -0,0 +1,17 @@
---
layout: post
title: "Measuring code coverage with codecov for libtool projects"
author:
name: "Gergely Polonkai"
email: "gergely@polonkai.eu"
---
I have recently found [codecov](https://codecov.io/); they offer free
services for public GitHub projects. As I have recently started writing
tests for my SWE-GLib project, I decided to give it a go. Things are not
that easy if you use GNU Autotools and libtool, though…
The problem here is that these tools generate their output under `src/.libs/`
(given that your sources are under `src/`), so `gcov` has a hard time
finding the coverage data files. Well, at least in the codecov
environment; it works fine on my machine.
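
A workaround sketch of my own (not from the post, and your paths may differ): point `gcov` at the object directory libtool uses, then let the uploader pick up the generated `.gcov` files.
```
# run the test suite with coverage flags enabled first, e.g.
#   make check CFLAGS="-g -O0 --coverage" LDFLAGS="--coverage"
cd src
gcov -o .libs *.c        # tell gcov where libtool put the object/coverage files
cd ..
bash <(curl -s https://codecov.io/bash)   # codecov's bash uploader picks up the *.gcov files
```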

@ -0,0 +1,326 @@
---
layout: post
title: "Writing a GNOME Shell extension"
---
I could not find a good tutorial on how to write a GNOME Shell
extension. There is a so-called step-by-step
[instruction list](https://wiki.gnome.org/Projects/GnomeShell/Extensions/StepByStepTutorial)
on how to do it, but it has its flaws, including grammar and clarity.
As I wanted to create an extension for my SWE-GLib library to display
the current position of some planets, I dug into the source code of
existing (and working) extensions and made up something. Comments welcome!
---
GNOME Shell extensions are written in JavaScript and are interpreted
by [GJS](https://wiki.gnome.org/action/show/Projects/Gjs). Using
introspected libraries from JavaScript is not a problem for me (see
SWE-GLib's
[JavaScript example](https://github.com/gergelypolonkai/swe-glib/blob/master/examples/basic.js);
it's not beautiful, but it works), but wrapping your head around
the Shell's concepts can take some time.
The Shell is a Clutter stage, and all the buttons (including the
top-right “Activities” button) are actors on this stage. You can add
practically anything to the Shell panel that you can add to a Clutter
stage.
The other thing to remember is the lifecycle of a Shell
extension. After calling `init()`, there are two ways forward: you
either use a so-called extension controller, or the plain old JavaScript
functions `enable()` and `disable()`; I will go on with the former
method for reasons discussed later.
If you are fine with the `enable()`/`disable()` function version, you
can ease your job with the following command:
```
gnome-shell-extension-tool --create-extension
```
This will ask you a few parameters and create the necessary files for
you. For what these parameters should look like, read on to the
next section.
## Placement and naming
Extensions reside under `$HOME/.local/share/gnome-shell/extensions`,
where each of them has its own directory. The directory name has to be
unique, of course; to achieve this, it is usually the same as the
UUID of the extension.
The UUID is a string of alphanumeric characters, with some extras added.
Generally, it should match this regular expression:
`^[-a-zA-Z0-9@._]+$`. The convention is to use the form
`extension-name@author-id`, e.g. `Planets@gergely.polonkai.eu`. Please
see
[this link](https://wiki.gnome.org/Projects/GnomeShell/Extensions/UUIDGuidelines)
for some more information about this.
## Anatomy of an extension
Extensions consist of two main parts, `metadata.json` and
`extension.js`.
The `metadata.json` file contains compatibility information and, well,
some meta data:
```json
{
"shell-version": ["3.18"],
"uuid": "planets@gergely.polonkai.eu",
"name": "Planets",
"description": "Display current planet positions"
}
```
Here, `shell-version` must contain all versions of GNOME Shell that are
known to load and display your extension correctly. You can insert minor
versions here, like I did, or exact version numbers, like `3.18.1`.
In the `extension.js` file, which contains the actual extension code,
the only thing you actually need is an `init()` function:
```javascript
function init(extensionMeta) {
    // Do whatever it takes to initialize your extension, like
    // initializing the translations. However, never do any widget
    // magic here yet.

    // Then return the controller object
    return new ExtensionController(extensionMeta);
}
```
## Extension controller
So far so good, but what is this extension controller thing? It is an
object which is capable of managing your GNOME Shell extension. Whenever
the extension is loaded, its `enable()` method is called; when the
extension is unloaded, you guessed it, the `disable()` method gets
called.
```javascript
function ExtensionController(extensionMeta) {
    return {
        extensionMeta: extensionMeta,
        extension: null,

        enable: function() {
            this.extension = new PlanetsExtension(this.extensionMeta);

            Main.panel.addToStatusArea("planets",
                                       this.extension,
                                       0, "right");
        },

        disable: function() {
            this.extension.actor.destroy();
            this.extension.destroy();
            this.extension = null;
        }
    }
}
```
This controller will create a new instance of the `PlanetsExtension`
class and add it to the panel's right side when loaded. Upon
unloading, the extension's actor gets destroyed (which, as you will
see later, gets created behind the scenes, not directly by us),
together with the extension itself. Also, as a safety measure, the
extension is set to `null`.
## The extension
The extension is a bit more tricky, as, for convenience reasons, it
should extend an existing panel widget type.
```javascript
function PlanetsExtension(extensionMeta) {
    this._init(extensionMeta);
}

PlanetsExtension.prototype = {
    // Note the colon: inside an object literal this is a property, not an assignment
    __proto__: PanelMenu.Button.prototype,

    _init: function(extensionMeta) {
        PanelMenu.Button.prototype._init.call(this, 0.0);

        this.extensionMeta = extensionMeta;

        this.panelContainer = new St.BoxLayout({style_class: 'panel-box'});
        this.actor.add_actor(this.panelContainer);
        this.actor.add_style_class_name('panel-status-button');

        this.panelLabel = new St.Label({
            text: 'Loading',
            y_align: Clutter.ActorAlign.CENTER
        });
        this.panelContainer.add(this.panelLabel);
    }
};
```
Here we extend the `Button` class of `PanelMenu`, so we will be able to
perform some action upon activation.
The only parameter passed to the parent's `_init()` function is
`menuAlignment`, with the value `0.0`, which is used to position the
menu arrow. (_Note: I cannot find any documentation on this, but it
seems that with the value `0.0`, a menu arrow is not added._)
The extension class in its current form is capable of creating the
actual panel button, displaying the text “Loading” in its center.
## Loading up the extension
Now with all the necessary import lines added:
```javascript
// The PanelMenu module that contains Button
const PanelMenu = imports.ui.panelMenu;
// Main, which gives access to the Shell panel (used by the extension controller)
const Main = imports.ui.main;
// The St class that contains lots of UI functions
const St = imports.gi.St;
// Clutter, which is used for displaying everything
const Clutter = imports.gi.Clutter;
```
As soon as this file is ready, you can restart your Shell (press
Alt-F2 and enter the command `r`), and load the extension with
e.g. the GNOME Tweak Tool. You will see the Planets button on the
right. This little label showing a static text, however,
is pretty boring, so let's add some action.
## Adding some periodic change
Since the planets' positions change continuously, we should update our
widget every minute or so. Let's patch our `_init()` a bit:
```javascript
this.last_update = 0;

MainLoop.timeout_add(1, Lang.bind(this, function() {
    this.last_update++;
    this.panelLabel.set_text("Update count: " + this.last_update);

    // Return true to keep the timeout source alive; returning nothing
    // would remove it after the first run.
    return true;
}));
```
This, of course, needs a new import line for `MainLoop` to become available:
```javascript
const MainLoop = imports.mainloop;
const Lang = imports.lang;
```
Now if you restart your Shell, your brand new extension will increase
its counter every second. This, however, presents some problems.
SWE-GLib queries can sometimes be expensive in both CPU and disk
operations, so updating our widget every second may cause trouble.
Also, planets don't move **that** fast. We could update our timeout value
from `1` to `60` or something, but why don't we just give our users a
chance to set it?
## Introducing settings
Getting settings from `GSettings` is hardly straightforward, especially
for software installed in a non-GNOME directory (which includes
extensions). To make our lives easier, I copied over a
[convenience library](https://github.com/projecthamster/shell-extension/blob/master/convenience.js)
from the [Hamster project](https://projecthamster.wordpress.com/)'s
extension, originally written by Giovanni Campagna. The relevant
function here is `getSettings()`:
```javascript
/**
 * getSettings:
 * @schema: (optional): the GSettings schema id
 *
 * Builds and return a GSettings schema for @schema, using schema files
 * in extensionsdir/schemas. If @schema is not provided, it is taken from
 * metadata['settings-schema'].
 */
function getSettings(schema) {
    let extension = ExtensionUtils.getCurrentExtension();

    schema = schema || extension.metadata['settings-schema'];

    const GioSSS = Gio.SettingsSchemaSource;

    // check if this extension was built with "make zip-file", and thus
    // has the schema files in a subfolder
    // otherwise assume that extension has been installed in the
    // same prefix as gnome-shell (and therefore schemas are available
    // in the standard folders)
    let schemaDir = extension.dir.get_child('schemas');
    let schemaSource;
    if (schemaDir.query_exists(null))
        schemaSource = GioSSS.new_from_directory(schemaDir.get_path(),
                                                 GioSSS.get_default(),
                                                 false);
    else
        schemaSource = GioSSS.get_default();

    let schemaObj = schemaSource.lookup(schema, true);
    if (!schemaObj)
        throw new Error('Schema ' + schema + ' could not be found for extension '
                        + extension.metadata.uuid + '. Please check your installation.');

    return new Gio.Settings({ settings_schema: schemaObj });
}
```
You can either incorporate this function into your `extension.js` file,
or just use the `convenience.js` file like I (and the Hamster applet) did
and import it:
```javascript
const ExtensionUtils = imports.misc.extensionUtils;
// Note the parentheses: getCurrentExtension() must be called to get the extension object
const Me = ExtensionUtils.getCurrentExtension();
const Convenience = Me.imports.convenience;
```
Now let's create the settings definition. GSettings schema files are XML
files. We want to add only one setting for now: the refresh interval.
```xml
<?xml version="1.0" encoding="utf-8"?>
<schemalist>
  <schema id="org.gnome.shell.extensions.planets" path="/org/gnome/shell/extensions/planets/">
    <key name="refresh-interval" type="i">
      <default>30</default>
      <summary>Refresh interval of planet data</summary>
      <description>Interval in seconds. Sets how often the planet positions are recalculated. Setting this too low (e.g. below 30) may raise performance issues.</description>
    </key>
  </schema>
</schemalist>
```
You need to compile these settings with `glib-compile-schemas --strict schemas/`.
Now let's utilize this new setting. In the extension's `_init()`
function, add the following line:
```javascript
this._settings = Convenience.getSettings();
```
And, for `getSettings()` to work correctly, we also need to extend our
`metadata.json` file:
```json
"settings-schema": "planets"
```
After another restart (please, GNOME guys, add an option to reload
extensions!), your brand new widget will refresh every 30 seconds.
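The tutorial stops here, but to actually honour the new key one could wire it into the refresh logic roughly like this (my own sketch, not from the original post; it reuses the `_settings`, `panelLabel`, `MainLoop` and `Lang` names introduced above):
```javascript
// Inside _init(); this._settings was set up above via Convenience.getSettings()
let interval = this._settings.get_int('refresh-interval');

this._timeout = MainLoop.timeout_add_seconds(interval, Lang.bind(this, function() {
    // recalculate and display the planet positions here
    this.panelLabel.set_text('Updated at ' + new Date().toLocaleTimeString());

    return true; // keep the source alive
}));

// Restart the timer whenever the user changes the setting
this._settings.connect('changed::refresh-interval', Lang.bind(this, function() {
    MainLoop.source_remove(this._timeout);
    // ...then create the timeout again with the new value...
}));
```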
## Displaying the planet positions
## The settings panel
## Start an application

@ -0,0 +1,67 @@
---
layout: post
title: "Lessens you learn while writing an SDK"
date: 2016-03-19 12:34:56
tags: [development]
published: false
author:
name: Gergely Polonkai
email: gergely@polonkai.eu
---
In the last few months I've been working on a GLib based SDK for
client applications that want to communicate with a Matrix.org
homeserver.
For whoever doesn't know it, Matrix is a decentralized network of
servers (homeservers). Clients can connect to them via HTTP and send
messages (events, in Matrix terminology) to each other. They are
called events because these messages can be pretty much anything from
instant messages through automated notifications to files or, well,
actual events (such as a vCalendar); anything that you can serialize
to JSON can go through this network.
My original intention was to integrate Matrix based chat into
Telepathy, a D-Bus based messaging framework used by e.g. the GNOME
desktop (more specifically Empathy, GNOME's chat client). After
announcing my plans among the Matrix devs, I quickly learned some
things:
1. they are more than open to any development ideas
1. they really wanted to see this working
1. they would have been happy if there were a GLib or Qt based SDK
With my (far from complete) knowledge of GLib I decided to move on
with this last point, hoping that it would help me a lot when I finally
implement the Telepathy plugin.
## Matrix devs are open minded
What I learned very quickly is that Matrix devs are very open minded
folks from different parts of the world. They are all individuals with
their own ideas, experiences and quirks, yet, when it comes to that,
they steer towards their goals as a community. Thus, getting
additional information from them while reading the spec was super
easy.
## The specification is easy to understand
Except when it is not. For these cases, see the previous point.
Jokes aside, anyone who has worked with communication protocols or JSON
APIs before can get along with it fast. The endpoints are all
documented, and if something is unclear, they are happy to help
(especially if you patch up the spec afterwards).
## Copying the SDK for a different language is not (always) what you want
I started my SDK in C, trying to mimic the Python SDK. This was a
double fail: the Python SDK was a volatile work in progress, and C and
Python are fundamentally different.
During the following weeks this became clear, so I switched to the Vala
language. It is much easier to write GObject based stuff in Vala,
although I had to fall back to C to get some features working. I also
planned and implemented a more object-oriented API, which is easier to
use in the GObject world.

@ -0,0 +1,27 @@
<p>
Gergely Polonkai is a systems engineer at a telco company, and
also a freelance self- and software developer.
</p>
<p>
He has been learning about different IT subjects since the late
1990s. These include web development, application building,
systems engineering, IT security and many others. He also dug his
nose deeply into free software, dealing with different types of
Linux and its applications,
while also writing and contributing to some open source projects.
</p>
<p>
On this site he is writing posts about different stuff he faces
during work (oh my, yet another IT solutions blog), hoping they
can help others with their job, or just to get along with their
brand new netbook that shipped with Linux.
</p>
<p>
“I believe one can only achieve success if they follow their own
instincts and listen to, but not bend under, others' opinions. If
you change your course just because someone says so, you are
following their instincts, not yours.”
</p>

@ -0,0 +1,49 @@
<article class="{% if page.post_listing %}col-sm-5 col-md-6 {% endif %}post">
{% if page.post_listing %}
<ul class="list-inline">
<li class="col-md-8">
{% endif %}
<header class="post-header">
{% if page.tag %}
<h5>
{% else %}
<h3>
{% endif %}
{% if page.post_listing %}
<a href="{{post.url | prepend: site.baseurl}}">
{% endif %}
{{post.title}}
{% if page.post_listing %}
</a>
{% endif %}
{% if layout.render_post %}
<div class="plusone-container"><div class="g-plusone" data-annotation="inline" data-size="small" data-width="300"></div></div>
{% endif %}
{% if page.tag %}
</h5>
{% else %}
</h3>
{% endif %}
<div class="meta pull-left">
{{post.author.name}}
</div>
<div class="meta pull-right">
{{post.date | date: "%b %-d, %Y :: %H:%M"}}
</div>
<div class="clearfix"></div>
</header>
<main>
{% if layout.render_post %}
{{content}}
{% else %}
{{post.excerpt}}
{% endif %}
</main>
{% include tag-link.html %}
{% if layout.post_listing %}
</li>
</ul>
{% endif %}
</article>

@ -0,0 +1,14 @@
<div id="disqus_thread"></div>
<script type="text/javascript">
var disqus_shortname = 'gergelypolonkai';
(function() {
var dsq = document.createElement('script');
dsq.type = 'text/javascript';
dsq.async = true;
dsq.src = '//' + disqus_shortname + '.disqus.com/embed.js';
(document.getElementsByTagName('head')[0] || document.getElementsByTagName('body')[0]).appendChild(dsq);
})();
</script>
<noscript>Please enable JavaScript to view the <a href="http://disqus.com/?ref_noscript">comments powered by Disqus.</a></noscript>
<a href="http://disqus.com" class="dsq-brlink">comments powered by <span class="logo-disqus">Disqus</span></a>

@ -0,0 +1,16 @@
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta name="keywords" content="{{page.keywords}}">
<meta name="description" content="Personal page of Gergely Polonkai">
<title>Gergely Polonkai{% if page.title %}: {{page.title}}{% endif %}</title>
<link rel="icon" type="image/x-icon" href="{{'/favicon.ico' | prepend: site.baseurl}}">
<link href="https://fonts.googleapis.com/css?family=Open+Sans:400,300,300italic,400italic,600,600italic,700,700italic,800,800italic" rel="stylesheet" type="text/css">
<link rel="alternate" type="application/rss+xml" title="Gergely Polonkai's Blog - RSS Feed" href="{{site.url}}/blog/atom.xml">
<link rel="stylesheet" type="text/css" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.4/css/bootstrap.min.css">
<link rel="stylesheet" href="{{'/css/style.css' | prepend: site.baseurl}}">
<link href="https://cdnjs.cloudflare.com/ajax/libs/jquery.terminal/1.6.3/css/jquery.terminal.min.css" rel="stylesheet"/>
<script type="text/javascript" src="//code.jquery.com/jquery-2.1.3.min.js"></script>
<script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.4/js/bootstrap.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery.terminal/1.6.3/js/jquery.terminal.min.js"></script>

@ -0,0 +1,35 @@
<div class="navbar navbar-inverse navbar-fixed-top">
<div class="container-fluid">
<div class="navbar-header">
<button type="button" class="navbar-toggle" data-toggle="collapse" data-target="#gp-navbar">
<span class="icon-bar"></span>
<span class="icon-bar"></span>
<span class="icon-bar"></span>
</button>
<a class="navbar-brand" href="{{'/' | prepend: site.baseurl}}"><img src="{{'/images/profile.svg' | prepend: site.baseurl}}" alt="Gergely Polonkai" style="background-color: white; height: 45px; margin-top: -13px;"></a>
{% if page.url != '/' %}
<a class="navbar-brand" href="{{'/' | prepend: site.baseurl}}">Gergely Polonkai</a>
{% endif %}
</div>
<div class="collapse navbar-collapse" id="gp-navbar">
<ul class="nav navbar-nav">
<li><a href="{{'/blog' | prepend: site.baseurl}}">Blog</a></li>
<li><a href="{{'/resume' | prepend: site.baseurl}}">Resume</a></li>
<li><a href="{{'/stories' | prepend: site.baseurl}}">Stories</a></li>
</ul>
<ul class="nav navbar-nav navbar-right">
<li><a href="https://about.me/gergely.polonkai">about.me</a></li>
<li><a href="{{'/disclaimer' | prepend: site.baseurl}}">Disclaimer</a></li>
<li class="dropdown">
<a href="#" class="dropdown-toggle" data-toggle="dropdown" role="button" aria-expanded="false"><span class="glyphicon glyphicon-pencil"></span> Contact me <span class="caret"></span></a>
<ul class="dropdown-menu" role="menu">
{% for contact in site.data.contacts %}
<li><a href="{{contact.link}}" target="_blank"><img src="{{'/images/contact/' | prepend: site.baseurl | append: contact.image}}" alt="" /> {{contact.text}}</a></li>
{% endfor %}
<li><a href="{{'/blog/atom.xml' | prepend: site.baseurl}}"><img src="{{'/images/contact/feed.png' | prepend: site.baseurl}}" alt="" /> RSS Feed</a></li>
</ul>
</li>
</ul>
</div>
</div>
</div>

@ -0,0 +1,17 @@
<nav>
<ul class="pagination">
<li{% if paginator.previous_page == null %} class="disabled"{% endif %}>
<a href="{{paginator.previous_page_path | prepend: site.baseurl | replace: '//', '/'}}" aria-label="Previous page">
<span aria-hidden="true">&laquo;</span>
</a>
</li>
{% for page in (1..paginator.total_pages) %}
<li{% if paginator.page == page %} class="active"{% endif %}><a href="{% if page == 1 %}{{'/blog' | prepend: site.baseurl}}{% else %}{{site.paginate_path | prepend: site.baseurl | replace: '//', '/' | replace: ':num', page}}{% endif %}">{{page}}</a></li>
{% endfor %}
<li{% if paginator.next_page == null %} class="disabled"{% endif %}>
<a href="{{paginator.next_page_path | prepend: site.baseurl | replace: '//', '/'}}" aria-label="Next page">
<span aria-hidden="true">&raquo;</span>
</a>
</li>
</ul>
</nav>

@ -0,0 +1,9 @@
<div class="container-fluid">
{% for post in posts limit: post_limit %}
{% capture counter %}{% cycle 'odd', 'even' %}{% endcapture %}
{% include blog-post.html %}
{% if counter == 'even' %}
<div class="clearfix"></div>
{% endif %}
{% endfor %}
</div>

@ -0,0 +1,11 @@
{% capture tagsize %}{{post.tags | size}}{% endcapture %}
{% if tagsize != '0' %}
<footer>
<p class="article-tags">
{% for tag in post.tags %}
<a href="{{tag | prepend: '/blog/tag/' | prepend: site.baseurl}}" class="tag-label">{{tag}}</a>
{% endfor %}
</p>
<br class="clearfix">
</footer>
{% endif %}

@ -0,0 +1,130 @@
<!DOCTYPE html>
<html>
<head>
{% include head.html %}
</head>
<body>
{% include header.html %}
<div class="container" id="main-container">
{{content}}
{% if page.name != 'about.html' %}
<div class="well well-sm small">
<div class="pull-left" id="about-well-image">
<a href="{{ site_url }}/about/">
<img src="{{'/images/profile.svg' | prepend: site.baseurl}}" alt="">
</a>
</div>
{% include about.html %}
<div class="clearfix"></div>
</div>
{% endif %}
</div>
<script type="text/javascript">
$(document).ready(function() {
$('#tagcloud-button').click(function() {
$('#tag-cloud').toggle('slow');
});
});
(function() {
var po = document.createElement('script');
po.type = 'text/javascript';
po.async = true;
po.src = 'https://apis.google.com/js/client:plusone.js?onload=start';
var s = document.getElementsByTagName('script')[0];
s.parentNode.insertBefore(po, s);
})();
(function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){(i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o), m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)})(window,document,'script','//www.google-analytics.com/analytics.js','ga');
ga('create', 'UA-43569023-1', 'polonkai.eu');
ga('send', 'pageview');
jQuery.extend_if_has = function(desc, source, array) {
for (var i=array.length;i--;) {
if (typeof source[array[i]] != 'undefined') {
desc[array[i]] = source[array[i]];
}
}
return desc;
};
(function($) {
$.fn.tilda = function(eval, options) {
if ($('body').data('tilda')) {
return $('body').data('tilda').terminal;
}
this.addClass('tilda');
options = options || {};
eval = eval || function(command, term) {
term.echo("you don't set eval for tilda");
};
var settings = {
prompt: 'guest@gergely.polonkai.eu> ',
name: 'tilda',
height: 400,
enabled: false,
greetings: 'Welcome to my Terminal. Type `help\' to list the available commands.\n\nPowered by http://terminal.jcubic.pl',
keypress: function(e) {
if (e.which == 96) {
return false;
}
}
};
if (options) {
$.extend(settings, options);
}
this.append('<div class="td"></div>');
var self = this;
self.terminal = this.find('.td').terminal(eval, settings);
var focus = false;
$(document.documentElement).keypress(function(e) {
console.log(e);
if (e.which == 96) {
self.slideToggle('fast');
self.terminal.focus(focus = !focus);
self.terminal.attr({
scrollTop: self.terminal.attr("scrollHeight")
});
}
});
$('body').data('tilda', this);
this.hide();
return self;
};
})(jQuery);
String.prototype.strip = function(char) {
return this.replace(new RegExp("^\\s*"), '')
.replace(new RegExp("\\s*$"), '');
}
jQuery(document).ready(function($) {
$('#tilda').tilda(function(command, terminal) {
command = command.strip();
switch (command) {
case 'help':
terminal.echo('about - Go to the about page');
terminal.echo(' ');
terminal.echo('More commands will follow soon!');
break;
case 'about':
location = '{{ site_url }}/about/';
break;
default:
terminal.echo(command + ': command not found');
break;
}
});
});
</script>
<div id="tilda"></div>
</body>
</html>

@ -0,0 +1,15 @@
---
layout: default
---
<div class="post">
<header class="post-header">
<h2>{{page.title}}</h2>
<div class="clearfix"></div>
</header>
<article class="post-content">
{{content}}
</article>
</div>

@ -0,0 +1,20 @@
---
layout: default
render_post: true
---
{% assign post = page %}
{% include blog-post.html %}
<div class="g-plus" data-action="share" data-height="15"></div>
<nav>
<ul class="pager">
{% if page.previous %}
<li class="previous"><a href="{{page.previous.url | prepend: site.baseurl}}">&larr; {{page.previous.title}}</a></li>
{% endif %}
{% if page.next %}
<li class="next"><a href="{{page.next.url | prepend: site.baseurl}}">{{page.next.title}} &rarr;</a></li>
{% endif %}
</ul>
</nav>
{% include disqus.html %}

@ -0,0 +1,15 @@
---
layout: default
post_listing: true
---
<h3 class="tag">{{ page.tag }}</h3>
{{content}}
<h4>Articles under this tag</h4>
{% if site.tags[page.tag] %}
{% assign posts = site.tags[page.tag] %}
{% include post-list.html %}
{% else %}
No posts with this tag.
{% endif %}

@ -0,0 +1,43 @@
#! /bin/sh
#
# Find all tags in all posts under _posts, and generate a file for
# each under blog/tag. Also, if a tag page does not contain the tag:
# or layout: keywords, the script will include them in the front
# matter.
layout="posts-by-tag"
for tag in `grep -h ^tags: _posts/* | sed -re 's/^tags: +\[//' -e 's/\]$//' -e 's/, /\n/g' | sort | uniq`
do
tag_file="blog/tag/${tag}.md"
echo -n "[$tag] "
if [ ! -f $tag_file ]
then
echo "creating ($tag_file)"
cat <<EOF > $tag_file
---
layout: $layout
tag: $tag
---
EOF
else
updated=0
if ! egrep "^tag: +${tag}$" $tag_file 2>&1 > /dev/null; then
echo "adding tag"
sed -i "0,/---/! s/---/tag: $tag\\n---/" $tag_file
updated=1
fi
if ! egrep "^layout: +" $tag_file 2>&1 > /dev/null; then
echo "adding layout"
sed -i "0,/---/! s/---/layout: $layout\\n---/" $tag_file
updated=1
fi
if [ $updated = 0 ]; then
echo ""
fi
fi
done

@ -0,0 +1,29 @@
---
layout: post
title: "Ethical Hacking 2012"
date: 2011-05-12 20:54:42
tags: [conference]
permalink: /blog/2011/5/12/ethical-hacking-2011
published: true
author:
name: Gergely Polonkai
email: gergely@polonkai.eu
---
Today I went to the Ethical Hacking conference with my boss. It was my first
appearance at such a conference, but I hope there will be more. Although we
have just started to redesign our IT security infrastructure with a 90% clear
goal, it was nice to hear that everything is vulnerable. I was wondering
whether we should sell all our IT equipment, fire all our colleagues (you
know, to prevent social engineering), and move to South America to herd
llamas or sheep, so the only danger would be some lurking pumas or jaguars.
Or I could simply leave my old background image on my desktop, from the
well-known game, which says: Trust is a weakness.
Anyway, the conference was really nice. We heard about the weaknesses of
Android, Oracle, and even FireWire. They showed demos of everything, and
exploited some free and commercial software with no problem at all. We saw
how much power the virtualisation admin has (although I think it can be
prevented, but I'm not sure yet). However, in the end, we could see that the
Cloud is secure (or at least it can be, in a few months or so), so I'm not
totally pessimistic. See you next time at Hacktivity!

@ -0,0 +1,88 @@
---
layout: post
title: "Gentoo hardened desktop with GNOME 3 Round one"
date: 2011-05-12 20:32:41
tags: [gentoo, gnome3, selinux]
permalink: /blog/2011/5/12/gentoo-hardened-desktop-with-gnome-3-round-one
published: true
author:
name: Gergely Polonkai
email: gergely@polonkai.eu
---
After having some hard times with Ubuntu (upgrading from 10.10 to 11.04), I
decided to switch back to my old friend, Gentoo. As I'm currently learning
about Linux hardening, I decided to use the new SELinux profile, which
supports the v2 reference policy.
Installation was pretty easy, using the [Gentoo x86
Handbook](http://www.gentoo.org/doc/hu/handbook/handbook-x86.xml). This profile
automatically turns on the `USE=selinux` flag (so does the old SELinux
profile), but `FEATURE=loadpolicy` is deprecated (and it is turned on by the
profile, so portage will complain about it until you disable it in
`/etc/make.conf`).
For the kernel, I chose `hardened-sources-2.6.37-r7`. This seems to be recent
enough for my security testing needs. I turned on SELinux, PaX and
grsecurity. So far, I have no problem with it, but I don't have X installed
yet, which will screw things up for sure.
After those hard times with Ubuntu mentioned before, I decided not to
install Grub2 yet, as it renders things unusable (e.g. my Windows 7
installation, which I sometimes need at the office). So I installed Grub 0.97
(the only version marked as stable, as I remember), touched
`/.autorelabel`, and rebooted.
My first mistake was using a UUID as the root device on the kernel parameter
list (I don't want to list all the small mistakes like forgetting to include
the correct SATA driver in my kernel and such). Maybe I was lame, but after
specifying `/dev/sda5` instead of the UUID thing, it worked like…
Well, charm would not be the right word. For example, I forgot to install the
lvm2 package, so nothing was mounted except my root partition. After I
installed it from the install CD, I assumed everything would be all right, but
I was wrong.
udev and LVM are a critical point in a hardened environment. udev itself
doesn't want to work without the `CONFIG_DEVTMPFS=y` kernel option, so I
also had to change that. It seemed that this could be done without the install
CD, as the kernel compiled with no problems. However, when it reached the point
where it compresses the kernel with gzip, it stopped with a `Permission denied`
message (although it was running with root privileges).
The most beautiful thing in a hardened environment (with Mandatory Access
Control enabled) is that root is not a real power user any more by default.
You can get this kind of message many times. There are many tools to debug
these; I will talk about them later.
So, my gzip needed a fix. After digging a bit on the Internet, I found that
the guilty thing is text relocation, which can be corrected if gzip is
compiled with PIC enabled. Thus, I turned on the `USE=pic` flag globally, and
tried to remerge gzip. Of course it failed, as it had to use gzip to unpack
the gzip sources. The same happened when I tried to install the PaX tools and
gradm to turn these checks off. The install CD came to the rescue again, with
which I successfully recompiled gzip, and with this new gzip, I compressed my
new kernel, with which udev started successfully. So far, so good, let's try
to reboot!
Damn, LVM is still not working. So I decided to finally consult the Gentoo
hardened guide. It says that the LVM startup scripts under `/lib/rcscripts/…`
must be modified, so LVM will put its lock files under `/etc/lvm/lock` instead
of `/dev/.lvm`. After this step and a reboot, LVM worked fine (finally).
The next thing was the file system labelling. SELinux should automatically
relabel the entire file system at boot time whenever it finds the
`/.autorelabel` file. Well, in my case it didn't happen. After checking the
[Gentoo Hardening](http://wiki.gentoo.org/wiki/Hardened_Gentoo) docs, I realised that the `rlpkg` program does exactly the same thing
(as far as I know, it is designed specifically for Gentoo). So I ran `rlpkg`,
and was kind of shocked. It says it will relabel ext2, ext3, xfs and JFS
partitions. Oh great, no ext4 support? Well, consulting the forums and adding
some extra lines to `/etc/portage/package.keywords` solved the problem (`rlpkg`
and some dependencies had to have the `~x86` keyword set). Thus, `rlpkg`
relabelled my file systems (I checked some directories with `ls -lZ`, and it
seemed good to me).
Now it seems that everything is working fine, except for the tons of audit
messages. Tomorrow I will check them with `audit2why` or `audit2allow` to see
if they are related to my SELinux lameness, or to a bug in the policy included
with Gentoo.

@ -0,0 +1,35 @@
---
layout: post
title: "Zabbix performance tip"
date: 2011-05-13 19:03:31
tags: [zabbix, monitoring]
permalink: /blog/2011/5/13/zabbix-performance-tip
published: true
author:
name: Gergely Polonkai
email: gergely@polonkai.eu
---
Recently I have switched from [MRTG](http://oss.oetiker.ch/mrtg/) + [Cacti](http://www.cacti.net/) + [Nagios](http://www.nagios.org/) + [Gnokii](http://www.gnokii.org/) to [Zabbix](http://www.zabbix.com/), and I
must say I'm more than satisfied with it. It can do anything the former tools
did, and much more. First of all, it can do the same monitoring as Nagios did,
but it does it much better. It can check several parameters within one
request, so network traffic is kept down. Also, its web front-end can generate
any kind of graph from the collected data, which took Cacti away. It can
also do SNMP queries (v1-v3), so querying my switches' port states and traffic
became easy, taking MRTG out of the picture (I know Cacti can do that too;
there were historical reasons why we had both tools installed). And the best
part: it can send SMS messages via a GSM modem natively, while Nagios had to
use Gnokii.
The trade-off is that I had to install the Zabbix agent on all my monitored
machines, but I think it is worth the price. I had even had to install NRPE to
monitor some parameters, which can be a pain on Windows hosts, while Zabbix
natively supports Windows, Linux and Mac OS X.
So I only had to create a MySQL database (which I already had for NOD32
central management) and install the Zabbix server. Everything went fine until
I reached about 1300 monitored parameters. MySQL seemed to be a bit slow on
disk writes, so my Zabbix “queue” filled up in no time. After reading some
forums, I decided to switch to PostgreSQL instead. Now it works like a charm,
even with the default Debian settings. However, I will have to add several
more parameters, and my boss wants as many graphs as you can imagine, so I'm
more than sure that I will have to fine-tune my database later.

@ -0,0 +1,29 @@
---
layout: post
title: "Gentoo hardened desktop with GNOME 3 Round two"
date: 2011-05-18 10:28:14
tags: [gentoo, gnome3, selinux]
permalink: /blog/2011/5/18/gentoo-hardened-desktop-with-gnome-3-round-two
published: true
author:
name: Gergely Polonkai
email: gergely@polonkai.eu
---
After several hours of `package.keywords`/`package.use` editing and package
compiling, I managed to install GNOME 3 on my notebook. Well, I mean, the
GNOME 3 packages. Unfortunately the fglrx driver didn't seem to recognise my
ATI Mobility M56P card, and the open source driver didn't want to give me GLX
support. When I finally found some clues about what I should do, I had to use
my notebook for work, so I installed Fedora 14 on it. Then I realised that
GNOME 3 was already included in Rawhide (Fedora 15), so I quickly downloaded
and installed that instead. Now I have to keep this machine in a working state
for a few days, so I will learn SELinux stuff in its native environment.
When I installed Fedora 14, the first AVC message popped up after about ten
minutes. That was a good thing, as I wanted to see `setroubleshoot` in action.
However, in Fedora 15, the AVC bubbles didn't show up even after a day. I
raised my left eyebrow and said that's impossible, SELinux must be disabled.
And it's not! It's even in enforcing mode! And it works just fine. I like it,
and I hope I will be able to get the same results with Gentoo if I can get
back to testing…

@ -0,0 +1,41 @@
---
layout: post
title: "Citrix XenServer 5.5 vs. Debian 5.0 upgrade to 6.0"
date: 2011-05-27 17:33:41
tags: [citrix-xenserver, debian]
permalink: /blog/2011/5/27/citrix-xenserver-vs-debian-5-0-upgrade-to-6-0
published: true
author:
name: Gergely Polonkai
email: gergely@polonkai.eu
---
A few weeks ago I upgraded two of our Debian based application servers from
5.0 to 6.0. Everything went fine, as the upgraded packages worked well with
the 4.2 JBoss instances. The new kernel needed a reboot, but as the network
had to be rebuilt, I postponed this reboot until the network changes.
With the network, everything went fine again; we successfully migrated our
mail servers behind a firewall. The Xen server (5.5.0; the upgrade to 5.6
still has to wait for a week or so) also rebooted fine with some storage disks
added. But the application servers remained silent…
After checking the console, I realised that they don't have an active console.
And when I tried to start them manually, XenServer refused with a message
regarding pygrub.
To understand the problem, I had to understand how XenServer boots Debian. It
reads the grub.conf in the first partition's root or `/boot` directory, and
starts the first option, without asking (correct me if I'm mistaken
somewhere). However, this pygrub thing cannot parse the new grub2 config.
This is kinda frustrating.
As the first step, I quickly installed a new Debian 5.0 system from my
template. Then I attached the disks of the faulty virtual machine and mounted
all its partitions. This way I could reach my faulty 6.0 system from a chroot
shell, from which I could install the `grub-legacy` package instead of grub,
install the necessary kernel and XenServer tools (which were missing from both
machines somehow), then halt the rescue system and start up the original
instance.
Next week I will upgrade the XenServer to 5.6.1. I hope no such
problems will occur.

@ -0,0 +1,25 @@
---
layout: post
title: "Oracle Database “incompatible” with Oracle Linux?"
date: 2011-05-27 17:53:31
tags: [linux, oracle]
permalink: /blog/2011/5/27/oracle-database-incompatible-with-oracle-linux
published: true
author:
name: Gergely Polonkai
email: gergely@polonkai.eu
---
Today I gave a shot to installing [Oracle
Linux](http://www.oracle.com/us/technologies/linux/overview/index.html). I thought I could easily install
an Oracle DBA on it. Well, I was naive.
As only the 5.2 version is supported by XenServer 5.5, I downloaded that
version of Oracle Linux. Installing it was surprisingly fast and easy: it
asked almost nothing, and booted without any problems.
After this came the DBA, 10.2, which threw an error message in my face
saying that this is an unsupported version of Linux. Bah.
Is it only me, or is it really strange that Oracle doesn't support their own
distro?

@ -0,0 +1,22 @@
---
layout: post
title: "Proxy only non-existing files with mod_proxy and mod_rewrite"
date: 2011-06-10 14:20:43
tags: [apache]
permalink: /blog/2011/6/10/proxy-only-non-existing-files-with-mod-proxy-and-mod-rewrite
published: true
author:
name: Gergely Polonkai
email: gergely@polonkai.eu
---
Today I got an interesting task. I had to upload some PDF documents to a site.
The domain is ours, but we don't have access to the application server that is
hosting the page yet. Until we get it into our hands, I did a trick.
I enabled `mod_rewrite`, `mod_proxy` and `mod_proxy_http`, then added the following
lines to my Apache config:
{% gist 47680bfa44eb29708f20 redirect-non-existing.conf %}
I'm not totally sure it's actually secure, but it works for now.

@ -0,0 +1,30 @@
---
layout: post
title: "Inverse of `sort`"
date: 2011-09-18 14:57:31
tags: [linux, command-line]
permalink: /blog/2011/9/18/inverse-of-sort
published: true
author:
name: Gergely Polonkai
email: gergely@polonkai.eu
---
I have been using \*NIX systems for about 14 years now, but they can still show
me new things. Today I had to generate a bunch of random names. I've created a
small Perl script which generates permutations of some usual Hungarian first
and last names, occasionally prefixing them with a Dr. title or using double
first names. For some reason I forgot to include a uniqueness check in the
script. When I ran it on the command line, I realized the mistake, so I
appended `| sort | uniq` to the command line. So I had around 200 unique names,
but in alphabetical order, which was awful for my final goal. Thus, I tried
shell commands like rand to create a random order, and after many of my tries
failed, the idea popped into my mind (not being a native English speaker): “I
don't have to create a «random order», I have to «shuffle the list».” So I
started typing `shu`, pressed Tab in the Bash shell, and voilà! `shuf` is the
winner; it does exactly what I need:
**NAME**
shuf - generate random permutations
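
For instance, the final pipeline became something along these lines (a sketch only; the script name is made up):
```
./generate-names.pl | sort | uniq | shuf > random-names.txt
```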
Thank you, Linux Core Utils! :)

@ -0,0 +1,16 @@
---
layout: post
title: "Why you should always test your software with production data"
date: 2011-12-11 12:14:51
tags: [development, testing, ranting]
permalink: /blog/2011/12/11/why-you-should-always-test-your-software-with-production-data
published: true
author:
name: Gergely Polonkai
email: gergely@polonkai.eu
---
I'm writing software for my company in PHP, using the Symfony 2 framework.
I've finished all the work, created some sample data, and it loaded perfectly.
Then I put the whole thing into production and tried to upload the production
data into it. Guess what… it didn't load.

@ -0,0 +1,29 @@
---
layout: post
title: "PHP 5.4 released"
date: 2012-03-20 13:31:12
tags: [php]
permalink: /blog/2012/3/20/php-5-4-released
published: true
author:
name: Gergely Polonkai
email: gergely@polonkai.eu
---
After a long time of waiting, PHP announced the 5.4 release on 1 March (also,
today they announced that they are finally migrating to Git, which is sweet
from my point of view, but it doesn't really matter).
About a year ago we became very aggressive towards a developer who created our
internal e-learning system. Their database was very insecure, and they didn't
really follow industry standards in many ways. Thus, we forced them to move
from Windows + Apache 2.0 + PHP 5.2 + MySQL 4.0 to Debian Linux 6.0 + Apache
2.2 + PHP 5.3 + MySQL 5.1. It was fun (well, from our point of view), as their
coders… well… they are not so good. The code that ran “smoothly” on the
old system failed at many points on the new one. So they code and code, and
write more code. And they still haven't finished. And now 5.4 is here. Okay, I
know it will take some time to get into the Debian repositories, but it's
here. And they removed `register_globals`, which will kill that funny code again
at so many points that they will soon have to rewrite the whole codebase to make
it work. And I just sit here in my oh-so-comfortable chair, and laugh. Am I
evil?

@ -0,0 +1,34 @@
---
layout: post
title: "Fast world, fast updates"
date: 2012-03-27 06:18:43
tags: [linux]
permalink: /blog/2012/3/27/fast-world-fast-updates
published: true
author:
name: Gergely Polonkai
email: gergely@polonkai.eu
---
We live in a fast world, that's for sure. When I first heard about Ubuntu
Linux and their goals, I was happy: they gave a Debian to everyone, but in
different clothes. It had fresh software in it, and they even gave support of
a kind. It was easy to install and use, even if one had no Linux experience
before. So people liked it. I even installed it on some of my servers
because of the new package versions that came more often. Thus I got an
up-to-date system. However, it had a price. After a while, security updates
came more and more often, and when I had a new critical update every two or
three days, I decided to move back to Debian. Fortunately I did this at the
time of a new release, so I didn't really lose any features.
A few years later, even Debian is heading this very same way. But as I
see it, the cause is not the same. It seems that upstream software is hitting
these bugs, and even the Debian guys don't have the time to check for them. At
the time of a GNOME version bump (yes, GNOME 3 is a really big one for the
UN\*X-like OSes), when hundreds of packages need to be checked, security bugs
show up more often. On the other hand, Debian is releasing a new
security update every day (I had one on each of the last three days). This, of
course, is good from one point of view, as we get a system that is more secure,
but most administrators don't have maintenance windows this often. I can think
of some alternatives like Fedora, but do I really have to change? Dear fellow
developers, please code more carefully instead!

@ -0,0 +1,28 @@
---
layout: post
title: "Wordpress madness"
date: 2012-06-14 06:40:12
tags: [wordpress, ranting]
permalink: /blog/2012/6/14/wordpress-madness
published: true
author:
name: Gergely Polonkai
email: gergely@polonkai.eu
---
I'm a bit fed up that I had to install [MySQL](http://www.mysql.com/) on my
server to have [Wordpress](http://wordpress.org/) working, so I Googled a
bit to find a solution for my pain. I found
[this](http://codex.wordpress.org/Using_Alternative_Databases). I don't know when
that post was written, but I think it's a bit out of date. I mean come on, PDO
has been part of PHP for ages now, and they say adding a DBAL to the dependencies
would be a project as large as (or larger than) WP itself. Well,
yes, but PHP is already a dependency, isn't it? Remove it guys, it's too
large!
Okay, to be serious… Having a heavily MySQL dependent codebase is a bad
thing in my opinion, and changing it is no easy task. But once it is done, it
would be child's play to keep it up to date, and to port WP to other
database backends. And it would be more than enough to call it 4.0, and
raising version numbers fast is a must nowadays (right, Firefox and Linux
kernel guys?)

@ -0,0 +1,28 @@
---
layout: post
title: "SSH login FAILed on Red Had Enterprise Linux 6.2"
date: 2012-06-18 18:28:45
tags: [linux, selinux, ssh, red-hat]
permalink: /blog/2012/6/18/ssh-login-failed-on-red-hat-enterprise-linux-6-2
published: true
author:
name: Gergely Polonkai
email: gergely@polonkai.eu
---
Now this was a mistake I should not have made…
About a month ago I moved my AWS EC2 machine from Amazon Linux to RHEL
6.2. This was good. I moved all my files and stuff, recreated my own
user, and everything was just fine. Then I copied my
[gitosis](https://github.com/tv42/gitosis) account (user `git` and its home
directory). Then I tried to log in. It failed. I was blaming OpenSSH for a week
or so, changed the config file in several ways, tried to change the permissions
on `~git/.ssh/*`, but still nothing. Permission was denied, and I was unable to
push any of my development changes. After a long time of trying, I
coincidentally `tail -f`-ed `/var/log/audit/audit.log` (I wanted to open
`auth.log` instead) and that was my first good lead. It told me that `sshd` was
unable to read `~git/.ssh/authorized_keys`, which gave me the idea to run
`restorecon` on `/home/git`. It solved the problem.
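
In other words, roughly these two commands (a sketch of the diagnosis and the fix, using the paths from the post):
```
tail -f /var/log/audit/audit.log   # watch the AVC denials while attempting to log in
restorecon -R -v /home/git         # restore the expected SELinux file contexts
```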
All hail SELinux and RBAC!

@ -0,0 +1,35 @@
---
layout: post
title: "Upgrades requiring a reboot on Linux? At last!"
date: 2012-06-22 20:04:51
tags: [linux]
permalink: /blog/2012/6/22/upgrades-requiring-a-reboot-on-linux-at-last
published: true
author:
name: Gergely Polonkai
email: gergely@polonkai.eu
---
I've recently received an article on Google+ about Fedora's new idea: package
upgrades that require a reboot. The article said that Linux guys have lost
their primary adage: “Haha! I don't have to reboot my system to install system
upgrades!” My answer was always this: “Well, actually you should…”
I think this can be a great idea if distros implement it well. PackageKit was
a good first step on this road. That software could easily solve such an
issue. However, it is sooo easy to do it wrong. The kernel, of course, cannot
be upgraded online (or could it be? I have some theories on this subject; I
wonder if they can be implemented…), but other packages are much different.
From the user's point of view the best would be if the packages were
upgraded in the background seamlessly. E.g. PackageKit should check if the
given executable is running. If not, it should upgrade it, while notifying the
user like “Hey dude, don't start Anjuta now, I'm upgrading it!”, or simply
refusing to start it. Libraries are a bit different, as PackageKit should check