
New Agent Automation with REST API

New Agent Automation with REST API

bigmyx
Greetings,
I am wondering what the best way is to automate the rollout of new agents via the REST API.
We have a configuration management system (Chef) which can install the agent package on a new machine.
As I understand it, we will need to modify the model to make this agent part of the deployment plan in the given fabric (not sure I have the terminology right).
My question is: which API methods should I use after I configure, install and run the new agent?
Can you point me to some examples?
Thanks!

Re: New Agent Automation with REST API

frenchyan
Administrator
One important thing is for the agent to know which fabric it belongs to. This can be done in several ways, including on the command line when starting the agent, as a shell environment property in the pre-master-conf.sh file, or by reading it from ZooKeeper.

The third option (ZooKeeper) is the default one. If you use the meta model to configure glu, then adding the new agent to the meta model and rerunning the setup (http://pongasoft.github.io/glu/docs/latest/html/setup-tool.html#generating-the-distributions-d) will generate a new configuration for ZooKeeper that can simply be uploaded (http://pongasoft.github.io/glu/docs/latest/html/setup-tool.html#configure-zookeeper-clusters-z).
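
For example, the rerun could look roughly like this (untested, and the exact invocation is an assumption on my part, so check setup.sh -h and the doc sections linked above; the meta model file name is a placeholder):

# regenerate the distributions from the updated meta model (the -D step)
$GLU_HOME/bin/setup.sh -D my-glu-meta-model.json.groovy

# push the regenerated configuration to ZooKeeper (the -Z step)
$GLU_HOME/bin/setup.sh -Z my-glu-meta-model.json.groovy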

Note that this can be shortcut, since the only thing that really matters is for the agent to be able to read its fabric from ZooKeeper: if you write the proper data into ZooKeeper yourself, the result is the same (and it is not hard). If you use Chef, I imagine you could automate this pretty easily (you can use zk-cli to write to ZooKeeper, for example). I would suggest running the setup manually (-D) with the new agent added to the meta model and looking at the data added to the distribution folder for the ZooKeeper part: it reflects exactly what needs to go into ZooKeeper.
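
For example, with the stock zkCli.sh that ships with ZooKeeper, it could look roughly like this (untested; the path assumes the default layout where the agent reads its fabric from /org/glu/agents/names/<agent name>/fabric, so double check it against what the setup tool puts in the distribution folder, and the host/agent/fabric names below are placeholders):

ZK=zk-host:2181
AGENT=new-agent-1
FABRIC=cloudon_pre

# parent znodes must already exist; create them the same way first if they do not
zkCli.sh -server $ZK create /org/glu/agents/names/$AGENT/fabric $FABRIC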

Once the agent's fabric has been defined, starting the agent will automatically make it part of glu and available like all the other agents. I am not sure what you deploy on your agents, but from there it is just a matter of assigning entries in the static model (as demonstrated in the tutorial) and deploying it. At that point glu makes no distinction between a brand new agent and an already existing one.

Yan

Re: New Agent Automation with REST API

bigmyx
Hi Yan,
Nice to meet you!
Thanks for pointing out the ZooKeeper configuration that is needed before adding a new agent (it absolutely makes sense).

Here is the second part of my question:
I am deploying Java apps (in the form of a WAR + Jetty), so I can use your Groovy template for Jetty (thanks for preparing that :) ).

I want to treat a fabric as an environment, like Dev, Prod, etc., and a static model as an app, like AuthService, NotifyService, etc.

I am not sure this mapping is correct, so please correct me if I am wrong.

After I run the agent (with the correct fabric), I need to tell a specific static model to include this agent.
Can this be done with a single REST API call, or do I need to manipulate an existing static model to add the new agent and its attributes to it?

I am trying to write some code that will run on the client during its bootstrap process.
The code will do the following:
 
- Download the model and save it as a JSON object
- Create a new agent entry in the JSON object
- Upload the new JSON as a new version of the static model

I am not sure this is the best approach...
What would you do in this case?

Thanks a lot!


Re: New Agent Automation with REST API

frenchyan
Administrator
Hello

Regarding fabrics, the way you are planning to use them makes sense. In glu, the fabric is the highest level of grouping: one agent belongs to one and only one fabric, and as a result a fabric is made up of all the agents belonging to it. This also means that a fabric is the highest level of orchestration: you can say "deploy everything in one fabric" and glu will do it, so if 2 agents are not in the same fabric, this requires 2 separate commands. Also, because a fabric is tied to a ZooKeeper cluster, a fabric needs to represent agents that are "close" to each other (it cannot span data centers, for example).

There is no API call to modify only part of the model; it is all or nothing. So to add an agent, you do need to modify the whole model and upload it again (see the sketch after the points below). What you plan to do makes sense; that being said, a couple of points:

* I am not too sure what you mean by "the code will run on the client" as I do not know what the client is, but because of the download/modify/upload logic, you could end up with a race condition if 2 "clients" do it at the same time and one of them would win...

* You have to think about where you want your source of truth to be, and I would recommend that it not be glu. I would suggest storing your model under version control: that way each client checks the model out, modifies it and checks it back in (at which point you can always check for conflicts / make sure it has not been modified between checkout and checkin).

* There have been many threads on this topic (you can do a search), but in general it is not recommended to use the glu model representation as your own model: you should define your own model in your own way and then run a little tool which converts it to json. OR

* Another thing to consider is that the model can be written in "json groovy" (http://pongasoft.github.io/glu/docs/latest/html/static-model.html#json-groovy-dsl), which is definitely the recommended way to write your model as it is a lot easier to read and write (you can use iterations, conditions, etc.). But once uploaded to glu the groovy part is gone, so when you download the model from glu again you only get the json. This is another reason to store it outside of glu. With this pattern, you could imagine that your list of agents is stored on a server somewhere and the json groovy simply loads it up.
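
To make the "all or nothing" part concrete, a rough sketch of the download/modify/upload cycle you describe could look like this (untested; the credentials, agent entry, mount point, script URL and initParameters are placeholders that depend on the glu script you deploy, the jq step is just one way to edit the json, and as mentioned above you may prefer to regenerate the model from your own source of truth instead of downloading it back from glu):

FABRIC=cloudon_pre
CONSOLE=http://localhost:8080/console/rest/v1

# 1. download the current static model for the fabric
curl -s -u "admin:pass" "$CONSOLE/$FABRIC/model/static" -o model.json

# 2. add an entry for the new agent (the model json has a top-level "entries" list)
jq '.entries += [{
      "agent": "new-agent-1",
      "mountPoint": "/services/auth/i001",
      "script": "http://repo.example.com/scripts/JettyGluScript.groovy",
      "initParameters": { "port": 9000 }
    }]' model.json > model.new.json

# 3. upload the whole model again (this replaces the previous version)
curl -v -u "admin:pass" -X POST \
     -H "Content-Type: text/json" \
     --data-binary @model.new.json \
     "$CONSOLE/$FABRIC/model/static"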

Hope this helps
Yan

Re: New Agent Automation with REST API

bigmyx
Yan,
This is very helpful.
Now I have a clear idea of which direction to follow.
I will research the rest.
Thank you!


Re: New Agent Automation with REST API

bigmyx
Yan,
One thing that is currently confusing me is how to upload the Groovy DSL model file.
I was searching for some examples but didn't find any.
For a test, I am trying this command:

curl -v -u "admin:pass" -i --form docfile=@deploy_model.groovy "http://localhost:8080/console/rest/v1/cloudon_pre/model/static"

But I get a 400 error back.

Can you advise on that?

Thanks a lot!

Re: New Agent Automation with REST API

frenchyan
Administrator
There is an example of using curl in the documentation:


Note that if you are uploading json/groovy (which seems to be what you are doing), then in example 2 you need to use:

Content-Type: text/json+groovy
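
For instance, something along these lines should be closer to what you want (untested; it assumes the console accepts the model as the raw request body rather than as a multipart form, and reuses the fabric and credentials from your command):

curl -v -u "admin:pass" -X POST \
     -H "Content-Type: text/json+groovy" \
     --data-binary @deploy_model.groovy \
     "http://localhost:8080/console/rest/v1/cloudon_pre/model/static"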

Hope this helps

Yan

Re: New Agent Automation with REST API

bigmyx
Yan,
Thanks again!

