TurnKey Linux Virtual Appliance Library

Suggestions and feedback for Drupal 11RC

dude4linux

I have been planning to move my home website to a TKL Drupal appliance since the beginning of the year, but had to put the project aside while I planned and executed a six-week trip to Europe.  Upon my return, I was pleased to find that a new TurnKey appliance based on Ubuntu 10.04 LTS was now available.  I was even more pleased to find it included the latest stable release of Drush, my favorite admin tool.  As I installed and set up the new server, I thought it might be helpful to document my experience.  Some thoughts from my notes follow.

1. Change the hostname


In /etc/hosts, list the FQDN before the short name:

	myserver.mydomain    myserver

Use the FQDN in /etc/hosts, otherwise Apache complains that it cannot determine the server's fully qualified domain name.
It would be very helpful if TKL allowed setting the host and domain names during installation, and also provided the ability to change them in confconsole.
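Apache derives the FQDN from the first name on the host's /etc/hosts entry, so the ordering matters. A quick sketch (the address and hostnames below are made-up examples):

```shell
# Illustrative /etc/hosts entry: FQDN first, then the short name
entry="192.168.1.10 myserver.mydomain myserver"
set -- $entry            # split into address, FQDN, alias
fqdn=$2
echo "$fqdn"             # the name Apache will treat as the FQDN
```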
2. Add alias for root
	root: admin_email_name@email_host

and then run newaliases
# newaliases
As administrator, I want to receive all messages sent to root@localhost.  In the current 11.0 RC, they are rejected as undeliverable (see below).
3. Change Postfix 'myhostname'
	myhostname = localhost
Change 'myhostname' from UNKNOWN to 'localhost'.
This allows Postfix to deliver mail addressed to root@localhost.
4. Set the default locale
# echo "LANG=\"en_US.UTF-8\"" > /etc/default/locale
This eliminates complaints in the logs about /etc/default/locale missing.  I wonder if this could be set during installation based on the keyboard selection.
5. Install logrotate, logwatch, fail2ban, and bsd-mailx
   logwatch monitors daily operations and mails a report to root.
   logrotate manages log files and deletes old logs.
   fail2ban monitors system logs and bans hackers using iptables.
   bsd-mailx allows reading local mail for root and sending test messages.
	# apt-get install logrotate logwatch fail2ban bsd-mailx
I recommend that logwatch, logrotate, and fail2ban be considered for inclusion in all TKL appliances.
6. Create fail2ban jail 'apache-access' 
	# Fail2Ban configuration file for Apache access logs
# Author: John Carver
# $Revision: 102 $

[Definition]

# Option:  failregex
# Notes.:  regex to match botnets scanning for .php files in the logfile. The
#          host must be matched by a group named "host". The tag "<HOST>" can
#          be used for standard IP/hostname matching and is only an alias for
#          (?:::f{4,6}:)?(?P<host>\S+)
# Values:  TEXT

failregex = ^<HOST> -.*GET .*\.php.* 40[34] .*$

# Option:  ignoreregex
# Notes.:  regex to ignore. If this regex matches, the line is ignored.
# Values:  TEXT
ignoreregex =

Warning:  This is severe punishment for anyone scanning your website looking for unprotected .php files.  It works okay for me, but use with caution. 
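Before enabling the jail, the failregex can be sanity-checked against a sample log line. In this sketch fail2ban's <HOST> tag is replaced by a plain IP pattern for grep, and the log line is made up:

```shell
# Test the apache-access failregex against a fabricated scanner hit
regex='^[0-9.]+ -.*GET .*\.php.* 40[34] .*$'
line='203.0.113.5 - - [10/Oct/2010:13:55:36 -0700] "GET /admin.php HTTP/1.1" 404 512 "-" "-"'
echo "$line" | grep -Eq "$regex" && verdict=banned || verdict=ignored
echo "$verdict"
```

fail2ban itself ships a `fail2ban-regex` tool for this purpose, which tests a real log file against a filter.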
7. Enable fail2ban jails
	# Fail2Ban configuration file.

[ssh]

enabled  = true
port     = ssh
filter   = sshd
logpath  = /var/log/auth.log
maxretry = 6

[ssh-ddos]

enabled  = true
port     = ssh
filter   = sshd-ddos
logpath  = /var/log/auth.log
maxretry = 6

# HTTP servers

[apache]

enabled  = true
port     = http,https
filter   = apache-auth
logpath  = /var/log/apache2/error.log
maxretry = 6

[apache-noscript]

enabled  = true
port     = http,https
filter   = apache-noscript
logpath  = /var/log/apache2/error.log
maxretry = 6

[apache-overflows]

enabled  = true
port     = http,https
filter   = apache-overflows
logpath  = /var/log/apache2/error.log
maxretry = 2

[apache-access]

enabled  = true
port     = http,https
filter   = apache-access
logpath  = /var/log/apache2/access.log
maxretry = 1
bantime  = -1

# Mail servers

[postfix]

enabled  = true
port     = smtp,ssmtp
filter   = postfix
logpath  = /var/log/mail.log

# Additional Services

[webmin-auth]

enabled  = true
filter   = webmin-auth
action   = iptables[name=webmin, port=12321, protocol=tcp]
logpath  = /var/log/auth.log
8. Configure NTP per VMware recommendations for virtual hosts.
	tinker panic 0

   Change servers to:
server 0.pool.ntp.org 
server 1.pool.ntp.org
server 2.pool.ntp.org
   or use a regional pool, e.g.
server 0.us.pool.ntp.org 
server 1.us.pool.ntp.org
server 2.us.pool.ntp.org
"The configuration directive tinker panic 0 instructs NTP not to give up if it sees a large jump in time. This is important for coping with large time drifts and also resuming virtual machines from their suspended state.
Note: The directive tinker panic 0 must be at the top of the ntp.conf file.
It is also important not to use the local clock as a time source, often referred to as the Undisciplined Local Clock. NTP has a tendency to fall back to this in preference to the remote servers when there is a large amount of time drift."
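Since the directive must come first, it is worth verifying the resulting file. A quick check against a sample ntp.conf (inlined here for illustration):

```shell
# Verify that 'tinker panic 0' is the first active directive
conf='tinker panic 0
server 0.pool.ntp.org
server 1.pool.ntp.org
server 2.pool.ntp.org'
first=$(printf '%s\n' "$conf" | grep -v '^#' | head -n 1)
echo "$first"
```

Against a real system the same check would read /etc/ntp.conf instead of the inline sample.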
I had problems formatting the items above because <code>,</code> no longer works and CKEditor seems to indent the first line of each <pre> formatted paragraph.
Adrian Moya

Thanks for all this info!

Your post should be called "Securing the drupal appliance"! This info is very valuable, nice tips here, good practices about security. 

Maybe this could have it's space in the community documentation. If you have any extra tips be sure to post to the forums. 

John Carver

Thanks Adrian

I'll consider writing a tutorial for the community documentation after the final release of 11.0.  I was posting my experience working with the 11.0 RC here in the hopes that some of the suggestions I've mentioned might get included into the final release.  I've also encountered some additional issues that I need to get documented.  I wonder if I should be filing these as bug reports or enhancement requests on the bug tracker.  I'm still not clear on how to formally make a suggestion or request an enhancement.

Information is free, knowledge is acquired, but wisdom is earned.

Adrian Moya

You could do that

Or simply edit the thread's title to something like "Suggestions and feedback for Drupal 11RC" so that you get the attention of the main devs. 

Any documentation you are willing to write down is welcome. I'm currently trying to reorganize the project documentation a bit, and I think some of your notes could go in a "general security tips" section and some in the drupal-specific docs. 

Feel free to use the dev wiki to write down your docs before we pass them to the official docs. You can create your own page under drafts.

John Carver

Running multi-site, multi-root installation

In moving my website from an Ubuntu server to a virtual host running the TKL Drupal appliance, I decided to keep my conventional development environment, i.e. development, staging (test), and production all on the same server.  At some point, I will probably move development and staging to a separate virtual host, but for now I'm keeping them together since that's what I'm familiar with.  Setting up a multi-root installation on a TKL appliance, or Debian/Ubuntu for that matter, requires a few adjustments.  One consideration is how to handle updates.  Debian's Drupal package is set up to support a single-root, multi-site installation.  I want to be proactive on security matters, and yet I'm cautious about relying on a packaged update automatically being applied to my production server.  Whenever a new security release of Drupal core or a third-party module is available, I want to apply it to the staging (test) site first and then run a series of checks before pushing it to the production site.  This requires separate 'roots' for each of the Drupal sites so that the staging site can be upgraded without disturbing the production site.  Fortunately, Drush makes upgrading core and updating modules easy and was designed to support multi-site, multi-root installations.  To make things even easier, I set up site-aliases for each of the three sites: @dev, @test, and @live.  Unfortunately, the TKL installation of Drush includes a shell wrapper for drush which breaks its multi-root capability.


#!/bin/sh
DRUSH=/usr/local/share/drush/drush
$DRUSH --root=/usr/share/drupal6 $@

This sets the Drupal root to always be /usr/share/drupal6 which can't be overridden by a site alias.
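A toy illustration of the problem. The mock parser below just mimics a command line onto which the wrapper has injected its fixed --root; it is not drush's actual option handling:

```shell
# The wrapper prepends --root to every invocation, so a fixed root
# is always present on the command line, even when a site alias is used.
parse_root() {
    root=""
    for arg in "$@"; do
        case $arg in
            --root=*) root=${arg#--root=} ;;   # record the forced root
        esac
    done
    echo "$root"
}
# simulate: drush @test status  (after the wrapper injects its --root)
effective=$(parse_root --root=/usr/share/drupal6 @test status)
echo "$effective"
```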

I believe a better way to set the default root for Drush is to put the following in /etc/drush/drushrc.php:
<?php
// $Id: drushrc.php for TurnKey Linux Drupal
// Specify a particular multisite. Change this to your default host name.
$options['l'] = 'http://localhost';

// Specify your Drupal core base directory (useful if you use symlinks).
$options['r'] = '/usr/share/drupal6';

and then replace the wrapper with a symlink to the Drush executable:

ln -s /usr/local/share/drush/drush /usr/local/bin/drush

Drush will then use the default root when none is specified, but it can be overridden by a site-alias or the -r option on the command line.

Information is free, knowledge is acquired, but wisdom is earned.

Alon Swartz

Using drushrc.php instead of wrapper script

I wasn't aware of the different ways you could specify parameters for drush. You made me take a closer look and I like drush even more. I especially like the new aliases support.

Anyway, I've replaced the wrapper script with a symlink, and created /etc/drush/drushrc.php specifying drupal's root. This should get those not very familiar with drush on their feet quickly, but provide power users the flexibility they need.

Thanks again for the great feedback.

Mike Gifford

Is this going to get removed?

Can this get removed from the default:

I really don't think it's a good practice to hard code this.

By default drush works from the directory you're in, which should be good enough.

Alon Swartz

Excellent feedback and suggestions

We are in the process of finalizing the appliances for the final 11.0 release, and I'll be working through your suggestions and including (most of) them in turnkey-core. I also like your suggestion about drush, I'll get something like that implemented.

If you have more feedback, try to post it ASAP so it makes it into 11.0. Where to post? The forum seems like the best place. Regarding bugs, it's up to you really - either the forums or the bug tracker on LP.

We mainly use the bug tracker for bugs that are critical and need to be in our face, or just as reminders to solve them when we have time, as a post in the forum can get lost in the mass.

John Carver

Drupal/Apache Security Checklist

Thanks Alon,

I have been working through the excellent Drupal and Apache Web Site Security Checklist from Nadeau Software Consulting.  Some of the recommendations have already been incorporated in the Drupal appliance and others are controversial, such as the recommendation to hide version information.  I tend to agree with the discussion in the Drupal Admin Guide and think it is more important to ensure you are running the latest version instead of trying to hide the fact you have an older, vulnerable version.  I'll continue to add to my list as quickly as I can.  Some of the recommendations can only be applied after Drupal is running, so they are candidates for a how-to tutorial.

Information is free, knowledge is acquired, but wisdom is earned.

Alon Swartz

(Most) suggestions implemented and feedback

1. Change the hostname - understandable, but not something we can implement globally as it doesn't always make sense.

2. Alias for root - interesting idea. I'm discussing it with Liraz and we might implement it via an inithook, no promises though.

3. Change postfix myhostname - done. In 11.0RC we commented out myhostname so postfix would calculate it on the fly. In retrospect I don't think this was the best solution, so I've updated it to be set to localhost.

4. Set the default locale - done. The default LANG is en_GB, but I've added /etc/default/locale. Specifying UTF-8 has performance issues (google it), and I can't see a good reason for it. Open to discussion.

5. Install logrotate, logwatch, fail2ban, bsd-mailx - partly done. Logrotate has been added to core. Logwatch is currently in discussion together with the postfix alias, as they are related. Fail2ban isn't a good global fit, but it's an interesting project that should be documented. The same goes for bsd-mailx.

6. Fail2ban apache-access - see 5

7. Enable fail2ban jails - see 5

8. NTP per VMware recommendations - done. I wasn't aware of tinker panic 0, an excellent workaround. I also updated the ntp pool servers.

9. Securing file permissions and ownership - it's a drush issue. It could be worked around by using setuid/setgid on the containing folders. Regarding using the root account, we have had this discussion somewhere on the forum, I'll try to find a link and post it for reference.

Again, thanks for all the feedback! If you have more, let us know.

John Carver

You're welcome, and thank you

You're welcome, and thank you for taking the time to review the suggestions.  I've spent quite a bit of time the last couple of months working through the upgrade and migration process.  Just happy to share some of what I've learned.

BTW, I really appreciate all the work the developers have put into TurnKey Linux, it's a great product.

Information is free, knowledge is acquired, but wisdom is earned.

John Carver

Re: Change the hostname

You might want to change the second line in /etc/hosts to drupal6.localdomain drupal6

I think that might be enough to keep Apache from complaining about a missing FQDN every time it starts.

Information is free, knowledge is acquired, but wisdom is earned.

John Carver

Suggestions (cont)

9. Securing file permissions and ownership

Whenever new modules or updates are downloaded by Drush, the file ownership and permissions are not changed, leaving them owned by uid 6226 and readable by the world.  Drupal's Admin Guide recommends securing file permissions and ownership so files are owned by the webmaster and read-only for Apache, with the exception of the /files directory, which needs to be read-write for Apache.  Unfortunately, the script provided there doesn't work on TKL Drupal because it doesn't follow symlinks and leaves the third-party modules unprotected.  Hopefully, some future version of Drush will enforce ownership and permissions rules during installation.  In the meantime, I'm working on a modified script that will recognize Drush's site-aliases and follow symlinks in a Debian/Ubuntu/TKL installation.
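A hedged sketch of the kind of permission pass described above; the key point is `find -L`, which follows symlinks. The directory layout below is invented for illustration (a real install would also chown everything to the webmaster account):

```shell
# Build a throwaway tree standing in for a Drupal root
root=$(mktemp -d)
mkdir -p "$root/sites/all/modules" "$root/files"
touch "$root/index.php" "$root/files/upload.txt"
# Directories traversable, code files read-only for the web server group
find -L "$root" -type d -exec chmod 750 {} \;
find -L "$root" -type f -exec chmod 640 {} \;
chmod 770 "$root/files"          # only files/ stays group-writable
perm=$(stat -c '%a' "$root/index.php")
echo "$perm"
```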

One question I have for the developers is about the advisability of creating a separate account for the webmaster role.  Currently all work on the Drupal appliance is done through the root account.  Coming from a RedHat/CentOS background, I'm comfortable working in root and have only been badly burned a couple of times.  It took me a while to get used to working with Ubuntu, but I came to respect the way it handled security.  Now I'm wondering if it wouldn't be a good idea to add a webmaster account and give it ownership of all the Drupal files.  You've probably already had this debate, but I just had to ask.

Information is free, knowledge is acquired, but wisdom is earned.

John Carver

Flawed logic in cron.sh

Earlier in my testing, I encountered a situation where Drupal's cron.php was being run twice each hour.  I opened a bug report, but then withdrew it after it stopped and I convinced myself it was something I had done.  Yesterday I tripped over the problem again and was forced to take another look at cron.sh and cron.php.  It began when I noticed an unexpected number of e-mails being forwarded by Postfix.  Examining the mail.log showed that every hour the cron daemon was sending a message to the Apache user, i.e. www-data@localhost.  That had me puzzled for a while.  What would be causing cron to send a message, and why to the Apache user, www-data, and not to root?  I checked /var/mail/www-data and sure enough there were hundreds of messages from cron.

In Intro to cron, I found the following quote regarding cron and /etc/crontab

MAILTO is who gets mailed the output of each command. If a command cron is 
running has output (e.g. status reports, or errors), cron will email the output 
to whoever is specified in this variable. If no one is specified, then the 
output will be mailed to the owner of the process that produced the output.

I checked the contents of each message and found it was the HTML generated from the Drupal install page.  Next I took a look at the contents of /etc/drupal/6/cron.sh to try and understand what was happening.

#!/bin/sh
# $Id: cron.sh 1878 2008-02-12 10:56:45Z luigi $

for site in /etc/drupal/6/sites/* ; do
	if [ ! "`basename $site`" = "all" ]; then
		BASE_URL=""
		for file in $site/baseurl.php $site/settings.php; do
			[ -f "$file" ] && BASE_URL=`grep '^$base_url' $file | cut -d"'" -f2`
			[ "X$BASE_URL" != "X" ] && break
		done
		if [ "X$BASE_URL" = "X" ] ; then
			BASE_URL="http://localhost"
		fi
		curl --silent --compressed --location $BASE_URL/cron.php
	fi
done
Examining the code shows that cron.sh scans the directory /etc/drupal/6/sites and attempts to run cron.php on every legitimate site it finds; however, if it fails to find a baseurl.php or settings.php with $base_url set, it substitutes 'http://localhost' instead.  In my case, the error was caused by sites/default containing an uninitialized settings.php.  If 'localhost' were a site alias for my main site, the result would be that cron ran twice each hour.  In my case, Drupal found the uninitialized settings.php, assumed it had never been run before, and launched install.php instead of cron.php.  The resulting output from curl was then e-mailed to www-data.  Note that the 'bad' behavior can be triggered by any unused directory under /etc/drupal/6/sites/, even an empty directory.
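The extraction step cron.sh relies on can be exercised in isolation (the settings.php line below is a made-up sample):

```shell
# How cron.sh pulls $base_url out of a settings.php line:
# grep finds the assignment, cut takes the text between the first quotes
settings="\$base_url = 'http://www.example.com';  // NO trailing slash!"
BASE_URL=$(printf '%s\n' "$settings" | grep '^$base_url' | cut -d"'" -f2)
echo "$BASE_URL"
```

If the assignment is commented out or missing, BASE_URL ends up empty, which is exactly the case the script's fallback (and the fix below it) hinges on.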

I have racked my brain, but I can't think of a single reason why you would want to run cron.php on a site without a properly initialized settings.php.  Here is the modified version that I think makes more sense.

#!/bin/sh
# $Id: cron.sh modified for TurnKey Drupal Appliance $

for site in /etc/drupal/6/sites/* ; do
	if [ ! "`basename $site`" = "all" ]; then
		BASE_URL=""
		for file in $site/baseurl.php $site/settings.php; do
			[ -f "$file" ] && BASE_URL=`grep '^$base_url' $file | cut -d"'" -f2`
			[ "X$BASE_URL" != "X" ] && break
		done
		if [ "X$BASE_URL" != "X" ]; then
			curl --silent --compressed --location $BASE_URL/cron.php
		fi
	fi
done

Note that it runs cron.php only if it finds $base_url set in either baseurl.php or settings.php.  I also changed /etc/crontab and added 'MAILTO=root' so that all future cron errors will be mailed to root (or the alias set in step 2) instead of ending up in an unread mailbox.

Information is free, knowledge is acquired, but wisdom is earned.

Alon Swartz

Firstly, thanks for reporting the issue...

Firstly, thanks for reporting the issue, but there isn't a generic fix that I can think of - let me explain.

From what I understand, $base_url is often used in multi-site configurations, so when drupal creates links to resources (files / images) it prepends the base_url to the resource. If we were to integrate your solution we would need to specify a base_url for the default site (e.g. http://localhost), but that would break existing code.

One solution does come to mind though, and that is to create an array in the cron script which stores each cron url it calls, and not call a url that has already been called. Though, this wouldn't really solve your specific issue.

Bottom line, I'd recommend either not creating a site folder if it's not being used, or tweak the cron script (it's in etc for that reason). I know this isn't a very good answer/solution, but the only one I currently can offer.

I'm still open to other solutions if you'd like to propose any.

BTW, the cron script is from the upstream packaging (debian), so if we do come up with a reasonable solution, I'd recommend submitting a patch upstream for inclusion in the official package.

John Carver

It's probably not a problem

It's probably not a problem many people would encounter.  I tripped over it because of the way I migrated my existing websites to the server and never really used the default install.  It was an interesting exercise to figure out exactly what was happening and why.

Information is free, knowledge is acquired, but wisdom is earned.

John Carver

Move CKeditor download

10. Move CKeditor to sites/all/libraries/ckeditor

Currently the CKeditor is downloaded to a subdirectory sites/all/modules/ckeditor/ckeditor.  This has the disadvantage of being overwritten when a new version of ckeditor is installed.  A Drupal best practice that I try to follow is to place all third-party packages that can't be included because of licensing concerns into sites/all/libraries.  This includes the gmap software.  The CKeditor module will automatically find the javascript in sites/all/libraries/ckeditor.
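A sketch of the relocation on a throwaway directory tree; the paths mirror a Drupal 6 layout, so adjust for a real install (and then clear Drupal's cache):

```shell
# Stand-in for a Drupal root with ckeditor downloaded inside the module
site=$(mktemp -d)
mkdir -p "$site/sites/all/modules/ckeditor/ckeditor" "$site/sites/all/libraries"
# Move the third-party editor out of the module tree so module
# updates can no longer overwrite it
mv "$site/sites/all/modules/ckeditor/ckeditor" "$site/sites/all/libraries/ckeditor"
result=$(ls "$site/sites/all/libraries")
echo "$result"
```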

Information is free, knowledge is acquired, but wisdom is earned.

Alon Swartz


I didn't know about that, thanks for the recommendation - done (ckeditor).

What do you mean regarding gmap?

John Carver


When you click on Regenerate marker cache, the Gmap module builds a custom file called gmap_markers.js and places it in the /files/js directory where it has read/write permissions.  It is my belief that for security reasons, the file needs to be made read-only and moved someplace where the Gmap module can find it.  In earlier versions of Gmap, you had to set a field with the location of the file.  I put mine in sites/all/libraries/gmap/ and it seems to find it automatically.  Also if you are using one of the third-party marker managers, the instructions say to download the marker manager and place it in the thirdparty folder.  I think that is code for the sites/all/libraries/ folder.

BTW, there is a huge amount of confusion between gmap_markers.js and gmap_marker.js which is part of the Gmap module.

Information is free, knowledge is acquired, but wisdom is earned.

John Carver

I stand corrected

Apparently I'm wrong about the gmap_markers.js location.  When using Drupal 5.x, I kept gmap_markers.js in sites/all/libraries/gmap.  The behavior of the Gmap module appears to have changed with the upgrade to Drupal 6.x and the change from a private to a public file system.  It seems I left a copy of gmap_markers.js in files/js and that was the copy being used, while I was assuming the module was automatically finding the copy in the third-party folder.  I'm still a tad nervous about leaving the file in files/js with read-write permissions, so I'm changing it to read-only until I figure out the correct way to handle the situation.  I just have to remember to change the file permissions before regenerating the marker cache again.  Sigh!

Information is free, knowledge is acquired, but wisdom is earned.

John Carver

Rushing to get items posted for consideration

In the interest of saving time I'm going to post a finalized version of apache.conf for Drupal6.  Most of the suggested changes come from the series of articles:

Drupal and Apache Security Checklist, Part 1, Part 2, and Part 3

by Nadeau Software Consulting

# Moved to httpd.conf so drupal6 can be disabled without affecting other virtual hosts.
# NameVirtualHost *:80
# NameVirtualHost *:443

<VirtualHost *:80>
    UseCanonicalName Off
    ServerName drupal6
    ServerAdmin  webmaster@localhost
    DocumentRoot /usr/share/drupal6/
</VirtualHost>

<VirtualHost *:443>
    SSLEngine on
    SSLCertificateFile /etc/ssl/certs/cert.pem
    ServerAdmin  webmaster@localhost
    DocumentRoot /usr/share/drupal6/
</VirtualHost>

<DirectoryMatch "/usr/share/drupal6/(?!(.+/)+)">
# Block any file that starts with "."
    <FilesMatch "^\..*$">
        Order allow,deny
    </FilesMatch>

# Block all files with "." in their names
    <FilesMatch "^.*\..*$">
        Order allow,deny
    </FilesMatch>

# Allow "." files with safe content types
    <FilesMatch "^.*\.(css|html?|txt|js|xml|xsl|gif|ico|jpe?g|png)$">
        Order deny,allow
    </FilesMatch>
    <FilesMatch "^.*\.(f4a|f4b|f4p|f4v|flac|flv|mov|mp3|qt|swf)$">
        Order deny,allow
    </FilesMatch>
    <FilesMatch "^.*\.(docx?|pptx?|xlsx?|odt|ods|odp|pdf)$">
        Order deny,allow
    </FilesMatch>

# Allow from anywhere
    <FilesMatch "^(index|xmlrpc).php$">
        Order deny,allow
    </FilesMatch>

# Allow access to cron from server and local network only
    <FilesMatch "^cron.php$">
        Order allow,deny
        # Localhost
        Allow from 127.0.0.1
        # Local network (use your local IP range)
        Allow from 192.168.
    </FilesMatch>

# Allow access to install and update from server and local network only
    <FilesMatch "^(install|update).php$">
        Order allow,deny
        # Localhost
        Allow from 127.0.0.1
        # Local network (use your local IP range)
        Allow from 192.168.
    </FilesMatch>
</DirectoryMatch>

<Directory /usr/share/drupal6/>
    Options +FollowSymLinks
    AllowOverride All
    order allow,deny
    allow from all

# Include rewrite rules here instead of .htaccess so it will be easier
# to disable .htaccess during performance tuning.
<IfModule mod_rewrite.c>
    RewriteEngine on

# If your site can be accessed both with and without the 'www.' prefix, you
# can use one of the following settings to redirect users to your preferred
# URL, either WITH or WITHOUT the 'www.' prefix. Choose ONLY one option:
#
# To redirect all users to access the site WITH the 'www.' prefix,
# (http://example.com/... will be redirected to http://www.example.com/...)
# adapt and uncomment the following:
# RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
# RewriteRule ^(.*)$ http://www.example.com/$1 [L,R=301]
#
# To redirect all users to access the site WITHOUT the 'www.' prefix,
# (http://www.example.com/... will be redirected to http://example.com/...)
# uncomment and adapt the following:
# RewriteCond %{HTTP_HOST} ^www\.example\.com$ [NC]
# RewriteRule ^(.*)$ http://example.com/$1 [L,R=301]

# Return 404's for any access to CSS, JS, and images
# unless they come from your site's pages or a trusted host.
    RewriteCond %{REMOTE_ADDR}      !^$
    RewriteCond %{HTTP_REFERER}     !^https?://%{HTTP_HOST}/.*$ [NC]
    RewriteRule .(css|js|ico|gif|jpe?g|png)$        - [R=404]

# Return 404's for all Drupal .php, .inc, .module, and .info files
# if served to hosts other than the local network hosts, but allow
# /index.php and /xmlrpc.php.
    RewriteCond %{REMOTE_ADDR}      !^$
    RewriteCond %{REQUEST_URI}      ^.*\.(php|inc|module|info)$
    RewriteCond %{REQUEST_URI}      !^/(index|xmlrpc).php$
    RewriteRule .*  - [R=404]

# Rewrite index.php to /
    RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /.*index\.php\ HTTP/
    RewriteRule ^(.*)index\.php$ /$1 [L,R=301]

# Force trailing / for directories
    RewriteCond    %{REQUEST_FILENAME}  -d
    RewriteRule    ^(.+[^/])$           $1/  [L,R=301]
</IfModule>
</Directory>

The last two items were suggested by "Drupal SEO: How Duplicate Content Hurts Drupal Sites" and "Duplicate Title Tags '/' and '/index.php'".

Speaking of SEO, you might want to consider adding Path Redirect and Global Redirect modules.  I found that both were necessary to deal with SEO issues related to PathAuto.

Alon,  Thanks for taking the time to consider all these changes.  I know there are a lot of them.

Information is free, knowledge is acquired, but wisdom is earned.

John Carver

Install & Update

It just occurred to me that the access rules for install.php and update.php would prevent installing or updating an appliance remotely.  For general use, the rule would have to be modified to permit access from everywhere with a caveat to modify the rule after installation.

Information is free, knowledge is acquired, but wisdom is earned.

John Carver

Additional Items

As I continue to review my notes and watch my server logs, I'm finding additional changes that I've made.

12. Increase PHP's 'memory_limit' from 32M to 64M

As more modules are added, Drush appears to require more memory for PHP scripts.
Change the value of 'memory_limit' in /etc/php5/cli/php.ini from 32M to 64M.  I chose to also increase the limit in /etc/php5/apache2/php.ini.
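The edit itself is a one-line substitution; a sketch against a sample php.ini fragment (the real edit targets the files named above):

```shell
# Bump memory_limit in a php.ini-style line (sample content)
ini='memory_limit = 32M'
updated=$(printf '%s\n' "$ini" | sed 's/^memory_limit = .*/memory_limit = 64M/')
echo "$updated"
```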

13. Move getid3 to the third-party folder and fix permissions

Even though the getID3 module is disabled, it appears to be accessed by admin pages, resulting in errors in the logs.  The files in /usr/share/php-getid3 are owned by root and Apache lacks read permission.

opendir(sites/all/modules/getid3/getid3/getid3): failed to open dir: Permission denied in /var/www/drupal/includes/file.inc on line 931.

Move the location of the getID3 third-party download to sites/all/libraries for consistency and so it doesn't get overwritten when the module is updated.

From the drupal root directory,

   # mv sites/all/modules/getid3/getid3 sites/all/libraries

Change group ownership so Apache has read access:
   # chgrp -R www-data /usr/share/php-getid3

Information is free, knowledge is acquired, but wisdom is earned.

Liraz Siri

Late feedback on a couple of ideas

1) Regarding the default mail alias (root@localhost -> your-email):

I kinda like the idea in theory, but I wonder how well it would work in practice, as root sometimes gets weird e-mails (e.g., the output of any executed cron job) that wouldn't make sense to many users.

OTOH, everyone has an e-mail address, and it makes sense that you would want to use that instead of digging around for mail in the spool, using a CLI mail reader, or setting up IMAP/POP.

Of course this would only make sense on appliances that have a mail server. It wouldn't go into Core.

2) Regarding logwatch: I wouldn't put this into Core as this would mainly be useful for advanced users and advanced users can set that up themselves if they want. For many non experts this would just amount to more useless, non-actionable noise in their inbox.

The hallmark of a good interface is that users know in advance what to expect and what you do reasonably matches that. I'm concerned that optimizing for more advanced users would come at the expense of breaking the expectations of less advanced users who need our help more.
