Hey Rick!! Thanks for the reply... that was roughly where my thinking was heading too.

I came across some apps that appear to be DevOps-related, one of which was ClusterSSH. As far as I can tell, it lets you set up the IP address of each host, the user to run the ssh connection as, and the ssh config files, so you can connect to the terminal sessions on the remote instances. It also appears to let you run commands against the whole set of systems. (Not sure if you can "package" the commands you want to run, so you can send one group of commands to one group of systems and another to a different group.) I'm currently looking at information on this, as well as a few others.

My use case has a bunch of VMs on DigitalOcean, so I need a way of managing/starting the processes on the machines - manual ain't going to cut it when I have 40-50 to test, and if things work it will easily scale to 300-500, where I have to spin them up, run the jobs, and then spin them down. Actually, it would be good to have a GUI/tool that drives the DigitalOcean API to create the droplets, run the work, take snapshots, and destroy them to save costs. Whew!!

Thoughts/comments etc.? (I've put a couple of rough sketches of what I'm picturing below the quoted thread.)

On Tue, Nov 8, 2016 at 12:12 PM, Rick Stevens <ricks@xxxxxxxxxxxxxx> wrote:
> On 11/08/2016 04:02 AM, bruce wrote:
>> Hi.
>>
>> Trying to get my head around what should be a basic/trivial process.
>>
>> I've got a remote VM. I can fire up a local term, and then ssh into
>> the remote VM with no problem. I can then run the remote functions;
>> all is good.
>>
>> However, I'd really like to have some process on the local side that
>> would allow me to do all of the above from a shell/program on the
>> local side.
>>
>> Pseudo process:
>> - spin up the remote term of user1@1.2.3.4
>> - track the remote term/session - so I could "log into it" and see
>>   what's going on for the initiated processes
>> - perform some dir functions as user1 on the remote system
>> - run appA as user1 on 1.2.3.4 (long running)
>> - run appB as user1 on 1.2.3.4 (long running)
>> - etc..
>> - when the apps/processes are finished, shut down the "remote term"
>>
>> I'd prefer to be able to do all of this without actually having the
>> "physical" local term be generated/displayed on the local desktop.
>>
>> I'm going to be running a bunch of long-running apps in the cloud, so
>> I'm trying to walk through the appropriate process/approach to
>> handling this.
>>
>> Sites/articles/thoughts are more than welcome.
>
> 1. Set up ssh keys so you don't need to use passwords between the
> two systems (no interaction).
>
> 2. Launch your tasks on the remote VM using screen over ssh by doing
> something like:
>
> ssh user1@1.2.3.4 screen -d -m -S firstsessionname "command you
> wish to run on the VM"
> ssh user1@1.2.3.4 screen -d -m -S secondsessionname "second command
> you wish to run on the VM"
>
> 3. If you want to check on the tasks, log into the VM via ssh
> interactively and check the various screen sessions. I recommend
> setting the screen session names via the "-S" option so they're
> easier to differentiate.
>
> That should do it. The items in step 2 could be done in a shell script
> if you're lazy like me. :-)
> ----------------------------------------------------------------------
> - Rick Stevens, Systems Engineer, AllDigital    ricks@xxxxxxxxxxxxxx -
> - AIM/Skype: therps2        ICQ: 226437340      Yahoo: origrps2      -
> -                                                                    -
> - The trouble with troubleshooting is that trouble sometimes         -
> -   shoots back.                                                     -
> ----------------------------------------------------------------------
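PS: to make the 40-50 (and later 300-500) VM side of this concrete, here's a rough sketch of what I'm picturing: basically your screen-over-ssh commands wrapped in a loop over a host list. The hosts.txt file, the session name, and the app command are all placeholders for whatever I actually end up running:

#!/bin/bash
# hosts.txt holds one "user@ip" per line, e.g. user1@1.2.3.4 (placeholder list)
APP_CMD="/home/user1/bin/appA --whatever-args"   # placeholder for the real long-running app

while read -r HOST; do
    # -n keeps ssh from swallowing the rest of hosts.txt on stdin;
    # screen -d -m -S starts a detached, named session that outlives the ssh connection
    ssh -n -o BatchMode=yes "$HOST" screen -d -m -S appA "$APP_CMD"
    echo "started appA on $HOST"
done < hosts.txt

# to check on any one box later:
#   ssh user1@1.2.3.4
#   screen -ls        # list the sessions
#   screen -r appA    # attach (ctrl-a d to detach again)

That relies on the ssh keys from your step 1 being in place, so the loop never stops to ask for a password.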
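On the spin-up/snapshot/destroy side: even before finding a GUI, the DigitalOcean v2 API is plain REST over HTTPS, so the same script could drive it with curl. This is only a sketch; $DO_TOKEN, $DROPLET_ID, and the name/region/size/image/ssh-key values are placeholders I'd have to fill in with real ones:

# create a droplet
curl -s -X POST "https://api.digitalocean.com/v2/droplets" \
     -H "Content-Type: application/json" \
     -H "Authorization: Bearer $DO_TOKEN" \
     -d '{"name":"worker-01","region":"nyc3","size":"512mb",
          "image":"fedora-24-x64","ssh_keys":[123456]}'

# snapshot it after the run (the droplet id comes back in the create response;
# the droplet may need a power-off first)
curl -s -X POST "https://api.digitalocean.com/v2/droplets/$DROPLET_ID/actions" \
     -H "Content-Type: application/json" \
     -H "Authorization: Bearer $DO_TOKEN" \
     -d '{"type":"snapshot","name":"worker-01-post-run"}'

# destroy it so it stops billing
curl -s -X DELETE "https://api.digitalocean.com/v2/droplets/$DROPLET_ID" \
     -H "Authorization: Bearer $DO_TOKEN"

There's also doctl, DigitalOcean's own CLI, which wraps the same endpoints if I'd rather not script the raw API.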
_______________________________________________
users mailing list -- users@xxxxxxxxxxxxxxxxxxxxxxx
To unsubscribe send an email to users-leave@xxxxxxxxxxxxxxxxxxxxxxx