problem with hubdns and remote backup

Hi, my hostname xxx.tklbam was working fine once the setup was completed. Then I moved the workstation to a different location, which resulted in a new IP address being assigned to it. I ran hubdns-init and hubdns-update and it was fine. I was in a hurry and did not consider that it may take some time for the DNS to propagate; correct me if I am wrong here... Then I went into my account, deleted it, and created a new one, and now I am getting a strange error in the shell when I run hubdns-update. This is what I am getting:

File "/volatile/hub/apps/domain/api/", line 44, in wrapper
self.user = User.objects.get(id=subkey.uid)
File "/usr/local/lib/python2.5/site-packages/django/db/models/", line 132, in get
return self.get_query_set().get(*args, **kwargs)
File "/usr/local/lib/python2.5/site-packages/django/db/models/", line 339, in get
% self.model._meta.object_name)
DoesNotExist: User matching query does not exist.
Judging by that last line, I guess it is something to do with a hubdns and API key conflict somewhere, but how do I fix this? Can anyone help?
Alon Swartz:

You mentioned "I went into my account and deleted it" - If you deleted your Hub account then the error makes sense, the API subkey being used on your server doesn't match a user in the Hub. You will need to re-initialize hubdns with your new user account's APIKEY.

What I don't understand though is why you deleted your Hub account? Or are you referring to deleting something else...

Hi Alon, thanks for getting back to me... I moved the PC to a new address, and this is the first time I am deploying such a system, so I am still playing around with it. I ran hubdns-update and it seemed to have worked, but when I tried to access it via a browser it did not work, and I thought it would be quicker to delete the account and create it again. Now I think I should have waited for the DNS to propagate before deleting the account in the Hub... I have a new API key; how do I reassign it using the shell? Any idea how to reinitialize or reinstall the whole thing? Thanks a lot for getting back to me.

Alon Swartz:

Just pass --force to hubdns when you initialize it with your new account's APIKEY:

hubdns-init --force APIKEY

For more info, check out the docs.
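For anyone following along, here is a sketch of the full re-initialization sequence. APIKEY and FQDN below are placeholders: the key is the one shown on your Hub profile page, and the hostname argument follows the hubdns docs.

```shell
# Re-initialize hubdns against the new Hub account
# (APIKEY and FQDN are placeholders to substitute).
hubdns-init --force APIKEY FQDN

# Push the server's current public IP address to the Hub's DNS.
hubdns-update

# If your hubdns version provides it, show the currently configured state.
hubdns-info
```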

I tried what you mentioned and I get "APIKEY does not exist"... Is there something I have missed?

I have a new api key since I created a new account...Thanks for reading.

Alon Swartz:

Are you substituting APIKEY with the actual API key displayed on your Hub profile page?

I have deleted my account and created a new one, and it also seems that I can't assign this new API key to it. I have tried to remove hubdns and tklbam and reinstall them, hoping that would fix my problem, but no luck. (I know the removal actually happened, since I got a message that it would clear xxx amount of space.)

Can you help? I do not want to start a fresh install; I would rather try to fix the problem. When you mention hubdns-init --force APIKEY, do you mean I should paste my actual APIKEY value in there?

I also need to reassign a new hostname, since it is not showing up on the hubdns control website for some reason. Can you give me a step-by-step guide on how to fix this problem? Thanks.

Thanks Alon, it looks like it has worked and the API key has been associated with my account. I think you should post the solution on this forum so it can help others like me in the future. I guess I will have to wait for the DNS to propagate over the net, am I right? So that the file server can be accessed via the hostname?

Hi all, it's all working fine for me now and the file server is up and running, accessible via a URL, FTP, etc.

Now I have a file server running in a remote location where I would like to have automated backups performed at a regular interval. I have used FileZilla, and it has now been almost 3 days and the files are still being downloaded from the remote TKL server. I tried using a D-Link DNS-325 to connect to the remote TKL server but I get a message like this: "RSYNC Test Result : ( SSH Refused )". Can anyone help? I have read somewhere that I have to allow SSH connections from my IP address in an /etc... file somewhere.

Any comment would be greatly appreciated...thanks.

Jeremy Davis:

TKLBAM will take care of your automated backup requirements.

As for your other issues, TBH I'm not really clear on what the actual problem is. You can currently connect via SFTP but not SSH? If so then I would assume that it is some config issue on your local end, as SFTP and SSH are both provided by the OpenSSH package and both use the same port (22). In fact, SFTP is a file transfer protocol that runs over SSH!
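A quick way to narrow this down, assuming the server's hostname resolves (files.example.com is a placeholder), is to test port 22 from the client side:

```shell
# Test whether anything is listening on the SSH port at all:
nc -vz files.example.com 22

# Or attempt a verbose SSH login to see exactly where the handshake fails:
ssh -v root@files.example.com
```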

Sorry I don't really have any clue what could be wrong.

Thanks for getting back to me... I will try to fiddle around, as I need to set up a remote automated backup... but your last reply does not seem to help. I am aware I am using SFTP, but I don't understand why I get the "SSH Refused" message from the rsync test... Any idea if there is a config file to edit or not?

Jeremy Davis:

So you are trying to connect to your remote TKL instance from your D-Link NAS box type thing? (I assume that's what the D-Link thing is.) Are you also connecting via SFTP from that too? If you are connecting via SFTP then you ARE connecting via SSH, so if one works then the other should too! I would assume that it is an issue with your D-Link box of some sort...
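On the server side, there are a few standard places worth checking. These are generic OpenSSH and TCP-wrappers locations, not anything TKL-specific, so treat this as a sketch:

```shell
# Confirm sshd is running and listening on port 22:
netstat -tlnp | grep ':22'

# The "/etc..." file mentioned earlier is most likely the TCP-wrappers pair;
# a restrictive entry here (e.g. "sshd: ALL: DENY") would refuse connections:
cat /etc/hosts.allow /etc/hosts.deny

# Check sshd's own config for restrictions such as AllowUsers or a non-default port:
grep -Ei 'allowusers|denyusers|listenaddress|^port' /etc/ssh/sshd_config
```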

Also, TKLBAM can use remote (non-S3) locations for backup (have a look at the docs). Depending on what files you are backing up, that may make a significant improvement in backup time (as TKLBAM compresses files).

Hi Jeremy, thanks for getting back to me... I managed to back up using FileZilla (SFTP) but that took almost 4 days. Question is, to save me going through the whole manual, is there a way to set up an automated remote backup on a regular basis? You are right about the NAS box, it's a D-Link DNS-325... but unfortunately I get a connection refused message... Either way round, D-Link or TKL OS, I need to set up some kind of remote automated backup; any help would be great.

Jeremy Davis:

TBH default TKLBAM is just too easy, and it's pretty cheap (US$0.14/GB/month) unless you have a huge amount of data. S3 storage is also more reliable than any physical HDD, so your data is more secure (in the data-loss sense), and if you use an escrow key (I don't bother) then it is also very secure (in the sense of others having access to your data).

The other fantastic thing about using S3 storage, IMO, is that if you have a hardware failure or lose the internet connection to/from your local server, then, assuming you can access AWS (e.g. via a smartphone), you can launch an instance in the cloud and have your site back up within minutes!

Anyway, even if you are determined not to use S3, you can still use TKLBAM (for free) with an alternative storage location. It's not as handy, user-friendly, and easy as the default usage, and it doesn't keep track of your backups (so it makes restores more complicated too). But it does work.

To use a non-S3 backup target you'll want to have a look at the TKLBAM FAQ, this point specifically (although I'd suggest you have a quick read through the full FAQ, as it has some other important info, so you understand what you're doing). To use your NAS box as the remote target, you'll still need to sort out your SSH issues (because that'd be the desirable way to transit your data over the net).
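As a rough sketch of what a non-S3 target looks like (the URL below is a placeholder; TKLBAM hands the target to duplicity under the hood, so duplicity-style URLs such as ssh:// or rsync:// apply, per the FAQ):

```shell
# One-off backup to a custom (non-S3) target on the NAS:
tklbam-backup --address ssh://root@nas.example.com/mnt/backups/tklserver

# TKLBAM also installs a daily cron job (/etc/cron.daily/tklbam-backup),
# which provides the regular-interval automation once backups are configured.
```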

Like I said, using TKLBAM (even with a non-S3 location) will still be quicker than plain copying, as by default TKLBAM compresses your data. Obviously, though, your internet connection speed will be the bottleneck: either the upload speed where your server is located, if it is on a 'standard' type connection (upload speeds are usually very slow on those), or more likely the download speed where your NAS is, if your server is hosted somewhere like Amazon (commercial datacentres usually have huge bandwidth).
