I am hiring Red Hat Support Engineers for a DevOps production environment.
Experience: 3-4 years only.
Candidates should possess the following skills in a production support environment:
Red Hat admin skills (LVM, FHS, troubleshooting, processes, networking, etc.)
Apache HTTPD server admin skills
Apache Tomcat server admin skills
Exposure to an ITIL service management environment is required
Cloud ops skills are desired
Puppet ops skills are desired
Job location: *Chennai, INDIA*
Email your resume: ganesh.hariharan(a)sysopminds.com
I can share more information offline. Thanks!
Hello,
Date: 2nd November, 2014 (Sunday)
Time: 10:00 to 19:00 IST
Venue: Directiplex, near Andheri Subway, Andheri (E), Mumbai, India
RSVP: http://meetu.ps/2CdMn0
Entry is free of charge and anyone can attend.
We are having an Ubuntu 14.10 Release Party, followed by Docker HackDay
and Meteor Day.
Apologies for the late announcement.
Hope to see some of you there!
Best Regards,
Rigved
Hello,
I need to periodically sync data from Windows Azure to a Raspberry Pi.
I can't use rsync, as the data is on a Windows server.
Currently I am syncing the data by parsing a JSON API: a Python script
extracts the links to the physical files and then downloads them. The
script is scheduled to run as a cron job.
Is there any other way to pull and push data between the Raspberry Pi and the server?
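For context, the current script boils down to something like this
(simplified, Python 2; the API endpoint and JSON field names below are
placeholders, not the real ones):

#!/usr/bin/env python
# Simplified sync script: fetch a JSON listing of file URLs from the
# Azure-hosted API and download any file we do not already have.
import json
import os
import urllib2

API_URL = 'http://example.cloudapp.net/api/files'  # placeholder endpoint
DEST_DIR = '/home/pi/data'

listing = json.load(urllib2.urlopen(API_URL))
for entry in listing:
    url = entry['url']  # placeholder field name
    target = os.path.join(DEST_DIR, os.path.basename(url))
    if os.path.exists(target):  # skip files already synced
        continue
    with open(target, 'wb') as out:
        out.write(urllib2.urlopen(url).read())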
--
Rahul.R.Mahale.
http://rahulmahale.wordpress.com/
Hello All,
We are trying to identify the right approach for fail-over and load
balancing on a Linux server (CentOS 7) running MariaDB (MySQL) and a
web server (nginx), with services such as Video on Demand and games
(streamed over the intranet, not the Internet).
Please guide us in selecting the right technology/solution (open source,
of course). It seems complicated to me because of the dynamic data (e.g. if
a user is watching a video when a fail-over happens, he should get seamless
streaming without interruption).
I have considered an HA cluster (Heartbeat) for fail-over and HAProxy for load
balancing, but I am still not sure how to handle fail-over of the dynamic data.
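For the load-balancing part, what I have in mind is a minimal haproxy.cfg
along these lines (an untested sketch; the nginx back-end IPs are placeholders):

# Untested sketch: round-robin HTTP load balancing across two nginx back-ends.
frontend www
    bind *:80
    mode http
    default_backend nginx_pool

backend nginx_pool
    mode http
    balance roundrobin
    server web1 192.168.0.11:80 check
    server web2 192.168.0.12:80 check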
--
Siji Sunny
A few days ago, I upgraded LO to 4.3.2.2 (amd64) on Linux and Mac OS X.
In Writer, while editing a doc with imported images (PNG), it loses
some of the images at random and shows a box with "Read Error" in the top
left corner.
After closing the doc and reopening it, all traces of the image are gone!
I am seeing this problem on both Debian Wheezy (amd64) and OS X desktops.
Anyone else experiencing this anomalous behavior?
-- Arun Khan
Hello,
I have set up a proxy server on my Raspberry Pi.
The Raspberry Pi is a Wi-Fi hotspot running a DHCP server on the subnet 192.168.5.1-255.
All my LAN traffic goes through a transparent Squid proxy.
I have one website hosted on the Pi with Apache.
I used bind9 to set up local DNS for the website's domain, so the local
website is now available on the LAN as abc.co.in.
But I want this site to be excluded from access.log, or better, to
bypass the transparent Squid proxy completely.
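For reference, the transparent interception itself is done with an iptables
rule roughly like this (wlan0 is the hotspot interface):

# Redirect all HTTP traffic arriving from the hotspot to Squid's port.
iptables -t nat -A PREROUTING -i wlan0 -p tcp --dport 80 -j REDIRECT --to-port 3128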
I tried the following rules in squid3, but with no success:
acl abclan dstdomain .abc.co.in
http_access allow abclan
always_direct allow abclan
and this
acl abclan dstdomain .abc.co.in
cache deny abclan
cache allow all
But neither worked; access.log still shows the abc.co.in entries.
Any suggestions on how to achieve this?
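One idea I have found but not yet tried is to keep such requests out of the
log rather than out of the proxy, by attaching the negated ACL to the
access_log directive:

# Untried: log everything except requests matching the abclan ACL.
access_log /var/log/squid3/access.log squid !abclan

Alternatively, since abc.co.in resolves to the Pi itself, would excluding the
Pi's own address from the iptables REDIRECT rule (with an ACCEPT rule before
it) be the cleaner way to bypass the proxy completely?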
Thanks!
--
Rahul.R.Mahale.
http://rahulmahale.wordpress.com/
Hi All
I have set up a CentOS 7 host server with KVM and created multiple VMs
running Ubuntu 14.04 using virt-manager. All the VMs had forward mode
"NAT", with internal DHCP addresses in the 192.168.122.x range, and I gave
them fixed IPs based on their MAC addresses. All these VMs are able to
access the Internet and run updates on their guest OSes.
But no server or device on the LAN / Internet is able to reach these VMs,
and I am running web and API servers on them. As they have to be part of
the host server's IP pool, the 192.168.1.x/24 network, I changed our virsh
network settings from the default to routed, as <forward mode='route' dev='em1'>,
and am struggling to configure fixed IPs for our VMs, i.e. host 192.168.1.11,
web server 192.168.1.12, API server 192.168.1.13, etc. I would like to do
exactly what is described in this link:
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/ht…
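What I have so far after virsh net-edit is roughly this (the MAC addresses
are placeholders for the VMs' real ones; possibly the overlap with the em1
subnet is part of my problem):

<network>
  <name>routednet</name>
  <forward mode='route' dev='em1'/>
  <bridge name='virbr1' stp='on' delay='0'/>
  <ip address='192.168.1.11' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.1.12' end='192.168.1.50'/>
      <!-- placeholder MACs; the real ones come from each VM's config -->
      <host mac='52:54:00:aa:bb:01' name='web' ip='192.168.1.12'/>
      <host mac='52:54:00:aa:bb:02' name='api' ip='192.168.1.13'/>
    </dhcp>
  </ip>
</network>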
Any help would be appreciated. Thanks.
Regards
Joel Divekar
Mumbai, India
Mob : +91 9920208223
Blog : http://joeldivekar.blogspot.com/
Linkedin : http://www.linkedin.com/in/joeldivekar
Slideshare : http://www.slideshare.net/JoelDivekar
Hello
I have had some success with my problem, but there are a few concerns.
What I did:
1. Installed Raspbian on the Pi (B+).
2. Made it a Wi-Fi hotspot using hostapd and udhcpd.
3. Installed the squid3 proxy server (apt-get install squid3).
4. Set up routing through iptables so that all traffic goes through the proxy.
5. Made the proxy transparent.
6. Wrote a Python script to analyse the Squid log (access.log); a
simplified version is shown after this list.
7. The log analyser counts the bytes used by each IP and dumps them into
an sqlite DB.
8. The Wi-Fi clients who exceed the limit are put into one ACL file,
blockedip.
9. From squid.conf I use the blockedip ACL list to restrict access.
10. After setting up the cron job, I am able to block clients after a
particular amount of data usage.
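For reference, the analyser in step 6 boils down to something like this
(simplified; the file paths are just the ones on my Pi):

#!/usr/bin/env python
# Simplified log analyser: sum the bytes column of Squid's native
# access.log per client IP and store the totals in an sqlite DB.
import sqlite3
from collections import defaultdict

LOG = '/var/log/squid3/access.log'
DB = '/home/pi/usage.db'

usage = defaultdict(int)
with open(LOG) as log:
    for line in log:
        fields = line.split()
        if len(fields) < 5:
            continue
        # native format: time elapsed client code/status bytes method url ...
        usage[fields[2]] += int(fields[4])

conn = sqlite3.connect(DB)
conn.execute('CREATE TABLE IF NOT EXISTS usage (ip TEXT PRIMARY KEY, bytes INTEGER)')
for ip, total in usage.items():
    conn.execute('INSERT OR REPLACE INTO usage VALUES (?, ?)', (ip, total))
conn.commit()
conn.close()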
My concerns:
1. I am not able to bypass the one domain, abc.com, from the proxy so
that it does not appear in access.log.
2. How can I increase the size of access.log so that the logs are kept
for longer, given that the Pi has little storage?
Any suggestions?
Regards,
--
Rahul.R.Mahale.
http://rahulmahale.wordpress.com/
Hi,
Has anybody here tried getting the NComputing L230 thin client to work with
Ubuntu LTS? Though the NComputing website claims support for Ubuntu, and
also provides a guide on configuring it, I have just not been able to get
the thin client to connect. There are a number of guides floating
around with slight variations, but none of them has worked for me so far.
Regards,
Ninad Gupte