sudo service httpd status
http://julien.danjou.info/blog/2014/db-integration-testing-strategies-python
http://www.reddit.com/r/javascript/comments/2lyd02/eli5_bower_grunt_gulp_npm/
Grafana is not a time series store or a metric-collection agent. It is a dashboard and graph composer that currently supports Graphite, InfluxDB and OpenTSDB (and KairosDB via a plugin).
So if you have metrics in one of those time series stores then Grafana is a really awesome tool for visualizing those metrics.
Grafana is all about maximizing the power and ease of use of the underlying time series store so the user can focus on building informative and nice-looking dashboards. It also lets users define generic dashboards through variables that can be used in metric queries; this allows the same dashboard to be reused for different servers, apps or experiments, as long as the metric naming follows a consistent pattern.
Grafana also uses Elasticsearch, not for log analytics but for annotating graphs with event/log information.
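As an illustration of the variable mechanism (the metric path and variable name here are hypothetical): a dashboard can define a $server template variable and use it inside a Graphite query, so one panel serves every host that follows the naming pattern:

```
# Hypothetical Graphite query in a Grafana panel
aliasByNode(servers.$server.cpu.load.avg, 1)

# With the dashboard variable $server set to web01, Grafana expands it to:
aliasByNode(servers.web01.cpu.load.avg, 1)
```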
yeoman
https://github.com/romainberger/yeoman-flask
Fabric
http://habrahabr.ru/post/214259/
NPM
http://habrahabr.ru/post/243335/
buildout
https://pypi.python.org/pypi/zc.buildout/2.2.1
http://blip.tv/pycon-us-videos-2009-2010-2011/pycon-2011-deploying-applications-with-zc-buildout-4897770
http://www.buildout.org/en/latest/
Grunt - depends on node.js
A command-line build tool for JavaScript projects, driven by tasks.
Grunt automates concatenating and minifying JS files, running tests, checking code with JSHint, and much more.
http://jonsuh.com/blog/get-started-with-grunt/
http://thanpol.as/grunt/Managing-large-scale-projects-with-Grunt/
http://24ways.org/2013/grunt-is-not-weird-and-hard/
http://www.html5rocks.com/en/tutorials/tooling/supercharging-your-gruntfile/
http://quickleft.com/blog/grunt-js-tips-tricks
http://frontender.info/grunt-is-not-weird-and-hard/
http://habrahabr.ru/post/170937/
http://habrahabr.ru/post/148274/
http://habrahabr.ru/company/dnevnik_ru/blog/181352/
http://flippinawesome.org/2013/09/23/automating-complex-workflows-with-grunt-custom-tasks/
https://github.com/Anonyfox/node-webkit-hipster-seed
https://news.ycombinator.com/item?id=7094465
https://github.com/louischatriot/nedb
A typical Grunt project consists of two files: package.json and Gruntfile.js.
package.json - project metadata, including Grunt itself and the Grunt plugins your project needs.
Gruntfile.js - configuration that loads and defines tasks.
Installing node.js also installs npm, the package manager we will use to install Grunt and everything it needs. First, create a package.json file in the project, recording the project name, its version, its dependencies, and the node.js version.
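A minimal package.json along those lines might look like this (the name, versions and plugins are illustrative, not taken from any real project):

```json
{
  "name": "my-project",
  "version": "0.1.0",
  "engines": {
    "node": ">=0.10.0"
  },
  "devDependencies": {
    "grunt": "~0.4.5",
    "grunt-contrib-jshint": "~0.10.0",
    "grunt-contrib-uglify": "~0.5.0"
  }
}
```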
./abe-application-api/public/bower_components/bootstrap/Gruntfile.js
./abe-application-client/Gruntfile.js
./abe-application-main/public/bootstrap/Gruntfile.js
./abe.www/public/bootstrap/Gruntfile.js
Also look into package.json in the corresponding folder
Grunt is installed as an NPM (Node Package Manager) module. If you don't have node.js and npm installed, install them first, either from the official node.js site or, on a Mac, via homebrew. You then need npm, the package manager for node (you can draw a parallel between npm and ruby gems). Note that if you install node.js from the official site, npm comes bundled; you only need to install npm separately if you built node.js from source or used homebrew.
Grunt itself is installed with the single command npm install -g grunt. The -g flag means a global install, i.e. Grunt will always be available from the command line because it is installed into the root node_modules folder. If you want to run Grunt only in a specific folder, run the same command in that folder without the -g flag.
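A minimal Gruntfile.js matching the two-file layout described above could be sketched as follows (tasks and paths are illustrative; it assumes grunt-contrib-jshint and grunt-contrib-uglify are listed in package.json):

```javascript
// Gruntfile.js - loads plugins and defines tasks (illustrative sketch)
module.exports = function (grunt) {
  grunt.initConfig({
    pkg: grunt.file.readJSON('package.json'),
    jshint: {
      all: ['src/**/*.js']
    },
    uglify: {
      build: {
        src: 'src/app.js',
        dest: 'dist/app.min.js'
      }
    }
  });

  grunt.loadNpmTasks('grunt-contrib-jshint');
  grunt.loadNpmTasks('grunt-contrib-uglify');

  // running `grunt` with no arguments executes the default task list
  grunt.registerTask('default', ['jshint', 'uglify']);
};
```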
Composer - dependency manager for PHP
http://habrahabr.ru/post/145946/
http://habrahabr.ru/post/258891/
https://blog.engineyard.com/2014/composer-its-all-about-the-lock-file
http://devedge.wordpress.com/2014/11/05/building-better-project-skeletons-with-composer-2/
composer.json
"minimum-stability": "dev",
"require": {
"php": ">=5.4",
"zendframework/zendframework": "dev-develop",
"rwoverdijk/assetmanager": "~1.3", https://github.com/rwoverdijk/assetmanager
"zfcampus/zf-apigility": "~1.0-dev", https://github.com/zfcampus/zf-apigility
"zfcampus/zf-configuration": "~1.0-dev", https://github.com/zfcampus/zf-configuration
"zfcampus/zf-apigility-doctrine": "dev-master", https://github.com/soliantconsulting/zf-apigility-doctrine
"zfr/zfr-cors": "1.1.*", https://github.com/zf-fr/zfr-cors
"doctrine/doctrine-mongo-odm-module": "0.8.*@dev", https://github.com/doctrine/DoctrineMongoODMModule
"soliantconsulting/toolbox": "dev-master",
"soliantconsulting/apple-connect": "dev-master",
"soliantconsulting/toolbox": "dev-master",
"soliantconsulting/apple-connect": "dev-master",
"soliantconsulting/soliantconsulting-module-attachment": "dev-master",
"soliantconsulting/abe-module-dbloadcd": "dev-master",
"soliantconsulting/abe-module-dbadminlog": "dev-master",
"videlalvaro/php-amqplib": "dev-master",
"hounddog/doctrine-data-fixture-module": "dev-master",
"heartsentwined/zf2-cron": "2.*" https://github.com/heartsentwined/zf2-cron
},
"require-dev": {
"zendframework/zftool": "dev-master",
"zendframework/zend-developer-tools": "dev-master",
"zfcampus/zf-apigility-admin": "dev-master",
"fzaninotto/faker" : "dev-master", https://github.com/fzaninotto/Faker test data generator
"pdepend/pdepend": "1.1.0" https://github.com/pdepend/pdepend
}
Bower.io - package manager for front-end (mostly JS)
http://nano.sapegin.ru/all/bower
http://mwop.net/blog/2013-12-03-bower-primer.html
http://techportal.inviqa.com/2014/01/29/manage-project-dependencies-with-bower-and-composer/
Example: bower.json
{
  "name": "Application Name",
  "version": "0.1.0",
  "main": "app/index.html",
  "ignore": [
    "**/.*",
    "node_modules",
    "components",
    "dist",
    "build"
  ],
  "dependencies": {
    "angular": "1.2.4",
    "angular-resource": "1.2.4",
    "angular-mocks": "1.2.4",
    "angular-scenario": "1.2.4",
    "angular-animate": "1.2.4",
    "jasmine-matchers": "https://github.com/JamieMason/Jasmine-Matchers.git",
    "requirejs": "latest",
    "requirejs-domready": "latest",
    "requirejs-text": "latest",
    "requirejs-plugins": "latest",
    "bootstrap": "3.0.3",
    "jquery": "~1.10",
    "less": "~1.5.1",
    "angular-route": "~1.2.3",
    "modernizr": "~2.7.1",
    "ng-grid": "~2.0.7",
    "restangular": "~1.2.0",
    "font-awesome": "~4.0.3",
    "angular-ui-router": "0.2.*",
    "angular-ui-bootstrap": "bootstrap3",
    "moment": "2.3.1",
    "angular-ui-select2": "~0.0.4",
    "angular-bootstrap": "~0.7.0",
    "eonasdan-bootstrap-datetimepicker": "26fae21deda3e8e8061b39de2319871af9168d8c"
  },
  "resolutions": {
    "angular": "1.2.4",
    "eonasdan-bootstrap-datetimepicker": "master"
  }
}
Apigility
http://apigility.org
http://techportal.inviqa.com/2013/12/03/create-a-restful-api-with-apigility/
Fabric
http://habrahabr.ru/company/aori/blog/215601/ Fabric etc
http://empirewindrush.com/tech/2014/04/14/intro-to-fabric/
http://docs.python-guide.org/en/latest/scenarios/admin/
http://www.mattmakai.com/static/presentations/2014-cos-ansible.html#/
Salt
https://missingm.co/2013/06/ansible-and-salt-a-detailed-comparison/
Ansible
https://habrahabr.ru/post/305400/
https://habrahabr.ru/post/306998/
https://habrahabr.ru/company/centosadmin/blog/304814/
http://habrahabr.ru/company/infobox/blog/249143/
http://habrahabr.ru/company/infobox/blog/250115/
https://ru.hexlet.io/courses/ansible?utm_medium=blog&utm_source=habr&utm_campaign=new_courses
http://www.ansibleworks.com/docs/intro_patterns.html
https://serversforhackers.com/editions/2014/08/26/getting-started-with-ansible/
http://www.ansible.com/resources
http://blog.versioneye.com/2014/07/03/intro-to-ansible/
http://habrahabr.ru/post/217689/
http://engineering.waveapps.com/post/80595462671/an-ansible-primer
http://www.stavros.io/posts/example-provisioning-and-deployment-ansible/
http://devo.ps/blog/2013/07/03/ansible-simply-kicks-ass.html
./ansible -K --limit production provision.yml
http://habrahabr.ru/company/selectel/blog/196620/
http://habrahabr.ru/post/195048/
Ansible takes on all the work of bringing remote servers into the desired state. The administrator only has to describe how to reach that state using so-called playbooks (the analogue of recipes in Chef). This approach makes reconfiguring the system very fast: it is enough to add a few new lines to a playbook.
The default inventory file is /etc/ansible/hosts, but it can also be set via the $ANSIBLE_HOSTS environment variable or the -i option when running ansible and ansible-playbook.
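The inventory is a plain INI-style file listing hosts, optionally in groups (hostnames here are illustrative):

```ini
# /etc/ansible/hosts (illustrative)
[webservers]
web01.example.com
web02.example.com

[dbservers]
db01.example.com
```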
In general terms, you'll deploy Ansible with a central server and with configured groups of clients to be managed, using hostnames in an Ansible hosts file. The configuration required on the managed hosts is minimal, requiring only a functional Python 2.4 or 2.6 build and configuration of SSH authorized_keys to allow for connections from the Ansible master server to each host.
There are many ways to go about this, as you can configure Ansible to connect to hosts as a certain user or as the user running the commands on the master server. You can go with root as the user, but many will prefer to connect using a normal user account and working with sudo on the target to run commands as root.
As an example, we might have a user named "ansible" on our master server; we would then add an "ansible" user to our managed hosts and give that user passwordless sudo capabilities. Alternatively, we could specify passwords for sudo on the hosts, or we could specify a different username to be utilized when connecting. The ultimate goal is to allow the Ansible control executable to be able to connect to each configured host via SSH and run commands. That's all there is to Ansible master and client configuration.
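Assuming the "ansible" user from the example above, the passwordless-sudo part reduces to a one-line sudoers drop-in on each managed host (a sketch; adapt it to your own policy and edit it with visudo):

```
# /etc/sudoers.d/ansible
ansible ALL=(ALL) NOPASSWD: ALL
```

The Ansible master's public key then goes into ~ansible/.ssh/authorized_keys on each managed host.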
How Ansible works
Once the basic setup of our master and managed hosts has been done, we can start looking at what Ansible can do. Here's a simple example:
[ansible@ansible1: ~]$ ansible all -m ping -u ansible --sudo
ansiblecentos.iwlabs.net | success >> {
"changed": false,
"ping": "pong"}
ansibleubuntu.iwlabs.net | success >> {
"changed": false,
"ping": "pong"
}
Here we ran a simple ping command to make sure that our managed hosts are configured and answering. The result is that both targeted hosts are ready and waiting.
Now, we can run a few other commands to investigate further:
[ansible@ansible1: ~]$ ansible all -m copy -a "src=/etc/myconf.conf dest=/etc/myconf.conf" -u ansible --sudo
ansiblecentos.iwlabs.net | success >> {
"changed": true,
"dest": "/etc/myconf.conf",
"gid": 500,
"group": "ansible",
"md5sum": "e47397c0881a57e89fcf5710fb98e566",
"mode": "0664",
"owner": "ansible",
"size": 200,
"src": "/home/ansible/.ansible/tmp/ansible-1379430225.64-138485144147818/source",
"state": "file",
"uid": 500
}
ansibleubuntu.iwlabs.net | success >> {
"changed": true,
"gid": 1000,
"group": "ansible",
"mode": "0664",
"owner": "ansible",
"path": "/etc/myconf.conf",
"size": 200,
"state": "file",
"uid": 1000
}
As you can see, these commands will cause the file /etc/myconf.conf to be copied to our two managed hosts. We will also get a JSON object returned with data on the copy, file ownership, and so forth. We can specify alternate ownership, permissions, and other variables on the command line as well.
We can also do things like make sure that a service is set to start at boot:
[ansible@ansible1: ~]$ ansible webservers -m service -a "name=httpd state=started" -u ansible --sudo
Or we can reboot those hosts:
[ansible@ansible1: ~]$ ansible webservers -m command -a "/sbin/reboot -t now"
Or we can pull an inventory of each client:
[ansible@ansible1: ~]$ ansible all -m setup -u ansible --sudo
That last command will output JSON objects describing each client, including total RAM, used RAM, CPU, network, and disk information, the OS version, kernel version, and so forth.
As you can see, Ansible provides a way to execute commands, gather data, and copy files to the targets, based on command-line parameters.
This functionality, by itself, could be done with only SSH and some scripting. Executing commands via SSH on remote hosts is as old as the hills, after all. What Ansible adds is the ability to make all that happen with a shorthand parameter set, along with grouping, inventory, and other higher-level management of the hosts. Each Ansible command-line function offers many options, such as the ability to reference multiple groups or to run the commands on a subset, such as only the first 50 servers in a group.
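For instance (group names here are hypothetical), the same module can be aimed at one group, a combination of groups, or a subset:

```shell
ansible webservers -m ping                # every host in one group
ansible 'webservers:dbservers' -m ping    # union of two groups
ansible 'webservers:&production' -m ping  # intersection: webservers also in production
ansible 'webservers:!staging' -m ping     # exclusion: webservers not in staging
```

Numeric slices of a group (e.g. only the first 50 hosts) are also supported; the exact slice syntax varies between Ansible versions.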
These capabilities will be instantly usable by Unix admins, and working with Ansible's tools to script up quick and simple automation and orchestration tasks is ultimately very easy. Further, we can also build Playbooks to collect sets of commands and tasks for simple management.
Ansible Playbooks
Playbooks are constructed using YAML syntax, so they are generally easily readable and configurable. For instance, this simple Playbook will make sure that NTPD is running on all hosts, using the "ansible" user and sudo to connect:
---
- hosts: all
  remote_user: ansible
  tasks:
    - service: name=ntpd state=started
      sudo: yes
We can also use Playbooks to do file copies. This is the Playbook version of the file copy noted above, but specifying the owner and permissions of the file on the client:
---
- hosts: all
  remote_user: ansible
  tasks:
    - name: Copy file to client
      copy: src=/etc/myconf.conf dest=/etc/myconf.conf
            owner=root group=root mode=0644
We can also use variables in Playbooks:
---
- hosts: webservers
  remote_user: root
  vars:
    ntp_service: 'ntpd'
  tasks:
    - service: name={{ ntp_service }} state=started
      sudo: yes
Beyond these examples is the use of templates. We can build templates that reference variables, then call those templates from within Playbooks to construct files as we require. We might create a template file for an Apache configuration and place that configuration on our clients using variables specified in the Playbook:
template: src=/srv/templates/apache.conf dest=/etc/httpd/conf.d/{{ vhost }}.conf
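The source file referenced above is an ordinary Jinja2 template; a minimal /srv/templates/apache.conf might look like this (it assumes vhost is defined in the Playbook's vars):

```
<VirtualHost *:80>
    ServerName {{ vhost }}
    DocumentRoot /var/www/{{ vhost }}
</VirtualHost>
```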
Of course, we may need to restart services afterward, and we can do that with the notify and handler functions:
      notify:
        - restart apache
  handlers:
    - name: restart apache
      service: name=apache state=restarted
The combination of all of these commands in a Playbook would make sure the appropriate virtual host configuration file is in place on the client, then restart Apache afterward so that the configuration changes will be picked up.
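As a sketch, those pieces combine into a single Playbook like this (the vhost value is illustrative):

```yaml
---
- hosts: webservers
  remote_user: ansible
  vars:
    vhost: 'www.example.com'
  tasks:
    - name: Place virtual host configuration
      template: src=/srv/templates/apache.conf dest=/etc/httpd/conf.d/{{ vhost }}.conf
      sudo: yes
      notify:
        - restart apache
  handlers:
    - name: restart apache
      service: name=apache state=restarted
```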
As you might expect, we can include files in Playbooks. We could create a file with all of our necessary handlers, then include just that file in new Playbooks. Thus, we could keep all those handlers configured in one place and still make them available throughout all Playbooks.
Further, you can configure roles that allow for collections of handlers, tasks, and variables to be included in Playbooks that reference those roles. For instance, you might have a set of handlers and tasks just for database servers, so you would set up a database role containing those files, then add the role to a Playbook to have all of those elements included in the Playbook. You can also configure dependencies that reference other roles as required.
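By convention a role is just a directory tree that Ansible picks up automatically; a hypothetical "database" role might be laid out like this:

```
roles/
  database/
    tasks/main.yml      # tasks run when the role is applied
    handlers/main.yml   # handlers, e.g. restarting the database service
    vars/main.yml       # role variables
    templates/          # Jinja2 templates used by the tasks
    meta/main.yml       # dependencies on other roles
```

A Playbook then includes everything at once with roles: [database].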
Thus, constructing Playbooks is not only straightforward, but also offers significant extensibility and natural organization. In addition, Playbooks are very simple to run:
[ansible@ansible1: ~]$ ansible-playbook myplaybook.yml -f 10
This command will run the Playbook myplaybook.yml with a parallelization of 10, meaning that the server will connect and run myplaybook.yml on 10 clients at once.
While Ansible uses paramiko, a Python SSH2 implementation, or native SSH to communicate with clients, there can be a scalability issue when moving into large numbers of clients. To address this, Ansible 1.3 offers an accelerate mode that launches a daemon over SSH that provides AES-encrypted communication directly with the client. This feature can speed up client communications substantially when measured in large-scale implementations as compared to paramiko or native SSH.
Ansible modules
Ansible includes a number of modules that allow for extended functionality, such as configuration and management of cloud services (say, Amazon EC2), as well as service-specific modules for popular database servers, file operations, and network devices. You can also create your own modules to handle site-specific requirements. Modules can be written in nearly any language, not just Python, so you could use Perl or Bash or C++ to create your modules.
Modules can be written to accept variables, and they are required to output JSON objects noting the status of the command along with any pertinent information that may be collected during runtime.
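As a sketch of that contract (the function and JSON keys here are invented for illustration): a custom module is any executable that prints a single JSON object on stdout, so even a few lines of shell qualify:

```shell
# Hypothetical minimal Ansible module written in shell.
# A module only has to print one JSON object describing the result;
# "changed" is expected, extra keys carry whatever data was gathered.
emit_result() {
  printf '{"changed": false, "msg": "%s"}\n' "$1"
}

emit_result "hello from a custom module"
```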
ZODB
http://www.zodb.org/en/latest/
https://pypi.python.org/pypi/ZODB/4.0.0
http://en.wikipedia.org/wiki/Zope_Object_Database
http://plope.com/Members/chrism/why_i_like_zodb
http://www.slideshare.net/carlos.delaguardia/zodb-tips-and-tricks
http://www.ibm.com/developerworks/aix/library/au-zodb/
https://github.com/cguardia/ZODB-Documentation
http://blog.startifact.com/posts/my-exit-from-zope.html
http://zodbdocs.blogspot.com/p/book-outline.html
http://blog.startifact.com/posts/older/a-misconception-about-the-zodb.html
http://www.fprimex.com/coding/zodb.html
http://www.reddit.com/r/Python/comments/1gpr4u/zodb_actively_maintained_python_3_beta_available/
Why and when would you pick ZODB instead of other solutions?
Major advantages: clean integration of regular objects with ACID transactions, and transparent references between objects without the need for reference swizzling.
Relational databases, with or without an ORM in front of them, expose you to the infamous object-relational impedance mismatch.
Document databases or key-value stores are great for many use cases, but for persisting whole object graphs you need to come up with a way of representing references between objects. ZODB does this transparently. There are certainly cases where a graph database will be more appropriate: ZODB is really about persistent Python object graphs, whereas graph databases can be used for graphs of any kind, which don't necessarily correspond to objects in your program.
ZODB is in many regards a logical equivalent of MongoDB, but it stores pickled Python objects instead of BSON. The technical differences between the two follow from that. It's dead simple to write for, but I can imagine a number of typical use cases where MongoDB would be faster in production.
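That pickling is also the root of the transparent references mentioned above; the core behavior can be sketched with the standard library alone (the Task class is invented for illustration; real ZODB code subclasses persistent.Persistent and commits through transactions instead of calling pickle directly):

```python
import pickle

class Task:
    """A plain Python object standing in for application data."""
    def __init__(self, title):
        self.title = title
        self.depends_on = []

# Build a small object graph in which one object is referenced twice.
build = Task("build")
test = Task("test")
deploy = Task("deploy")
test.depends_on.append(build)
deploy.depends_on.append(build)   # same object as above, not a copy
deploy.depends_on.append(test)

# Round-trip the whole graph: shared references survive intact,
# so the restored graph contains one "build" object, not two copies.
restored = pickle.loads(pickle.dumps(deploy))
assert restored.depends_on[0] is restored.depends_on[1].depends_on[0]
```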
https://pypi.python.org/pypi/eye/1.0 ZODB browser
https://pypi.python.org/pypi/zodbbrowser ZODB browser
ZEO
https://pypi.python.org/pypi/ZEO
http://www.ztfy.org/++lang++en/installation/zeo/zeo.html
http://community.webfaction.com/questions/7925/how-to-setup-a-zeo-server