
CS3214 Customizable Shell


CS3214 F

Project 1 - “Customizable Shell”

Due Date: See website for due date (Late days may be used.)

This project must be done in groups of 2 students. Self-selected groups must have registered using the grouper app (URL). Otherwise, a partner will be assigned to you.

1 Introduction

This assignment introduces you to the principles of process management and job control in a Unix-like operating system. In this project, you will develop a simple job control shell.

This is an open-ended assignment. In addition to implementing the required functionality, we encourage you to define the scope of this project yourself.

2 Base Functionality

A shell receives line-by-line input from a terminal that represents user commands. Some user commands are builtins, which are implemented by the shell itself. If the user inputs the name of such a built-in command, the shell will execute this command. Otherwise, the shell will interpret the input as containing the name of an external program to be executed, along with arguments that should be passed to it. In this case, the shell will fork a new child process and execute the program in the context of the child. Normally, the shell will wait for a command to complete before reading the next command from the user. However, if the user appends an ampersand ‘&’ to a command, the command is started and the shell will return to the prompt immediately. In this case, we refer to the running command as a “background job,” whereas commands the shell waits for before processing new input are called “foreground jobs.”
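To make the fork-and-wait behavior concrete, here is a minimal sketch (not the provided starter code) of launching one external command in the foreground or background; a real shell adds job bookkeeping, signal handling, and error checking:

    #include <stdbool.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    /* Minimal sketch: run one external command, waiting only if it is a foreground job. */
    static void run_command(char **argv, bool background)
    {
        pid_t pid = fork();
        if (pid == 0) {                 /* child: become the requested program */
            execvp(argv[0], argv);
            perror("execvp");           /* reached only if exec failed */
            exit(1);
        }
        if (!background) {              /* foreground job: wait before the next prompt */
            int status;
            waitpid(pid, &status, 0);
        }
        /* background job: return to the prompt immediately */
    }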

The shell provides job control. A user may interrupt foreground jobs, send foreground jobs into the background, and vice versa. Thus at a given point in time, a shell may run zero or more background jobs and at most one foreground job. If there is a foreground job, the shell waits for it to complete before printing another prompt and reading the next command. In addition, the shell informs the user about status changes of the jobs it manages. For instance, jobs may exit, terminate due to a signal, or be stopped for several reasons.

At a minimum, we expect that your shell has the ability to start foreground and background jobs and implements the built-in commands ‘jobs,’ ‘fg,’ ‘bg,’ ‘kill,’ ‘exit,’ and ‘stop.’ The semantics of these commands should match the semantics of the same-named commands in bash. The ability to correctly respond to ^C (SIGINT) and ^Z (SIGTSTP) is expected, as are informative messages about the status of the children managed. Like bash, you should use consecutively numbered small integers to enumerate your jobs.

For the minimum functionality, the shell need not support pipes (|), I/O redirection (< > >>), nor the ability to run programs that require exclusive access to the terminal (e.g., vim).

We expect most students to implement pipes, I/O redirection, and managing the controlling terminal to ensure that jobs that require exclusive access to the terminal obtain such access (see Section 3.3). Beyond that, cush’s customizability, described in Section 5, should allow for plenty of creative freedom.

3 Strategy

3.1 Handling SIGCHLD To Process Status Changes

At a given point in time, a user may have multiple jobs running, each executing arbitrary programs chosen by the user. Because the shell cannot and does not know what these programs do, it has to rely on a notification facility from the OS to be informed when these jobs encounter events the shell needs to know about. We refer to such events as “changing status,” where “status” means whether the job is running, has been stopped, has exited, or has been terminated with a signal (for instance, crashed).

This notification facility involves a protocol in which the OS kernel sends an asynchronous signal (SIGCHLD) to the shell, and in which the shell then follows up by executing a system call (a variant of wait(), specifically waitpid(), as shown in the provided starter code).

Thus, you will need to catch the SIGCHLD signal to learn when the shell’s child processes change status. Since child processes execute concurrently with respect to the parent shell, and since the shell has no knowledge of what these processes are doing, it is impossible to predict when a child will exit (or terminate with a signal), and thus it is impossible to predict when this signal will arrive. In the worst case, a child may have already terminated by the time the parent returns from fork()! You also should not make any assumptions about how a child process might change state: for instance, even if the user issues a kill built-in command to terminate a process, that process might not immediately terminate (or may not terminate at all), so the shell should not assume that a status change occurred unless and until it has first-hand information from the OS that it did.

Because of the asynchronous nature of signal delivery, you will need to block handling of the signal in those sections of your code where you access data structures that are also needed by the handler that is executed when this signal arrives. For example, consider the data structure used to maintain the current set of jobs. A new job is added after a child process has been forked; a job may need to be removed when SIGCHLD is received. To avoid a situation where the job has not yet been added when SIGCHLD arrives, or - worse - a situation in which SIGCHLD arrives while the shell is adding the job, the parent should block SIGCHLD until after it has completed adding the job to the list. If the SIGCHLD signal is delivered to the shell while the shell blocks this signal, it is marked pending and will be received as soon as the shell unblocks this signal.
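As an illustration, the sketch below blocks SIGCHLD around the job-list update using sigprocmask(2) directly; the struct job type and add_job() call are hypothetical stand-ins for your own data structures, and the provided signal_support.c helpers wrap the same pattern:

    #include <signal.h>

    struct job;                                   /* hypothetical job record */
    extern void add_job(struct job *job);         /* hypothetical list insertion */

    /* Sketch: keep the SIGCHLD handler from running while the job list changes. */
    void add_job_with_sigchld_blocked(struct job *job)
    {
        sigset_t mask, prev;
        sigemptyset(&mask);
        sigaddset(&mask, SIGCHLD);
        sigprocmask(SIG_BLOCK, &mask, &prev);     /* SIGCHLD is now held pending */

        add_job(job);                             /* safe: the handler cannot interrupt this */

        sigprocmask(SIG_SETMASK, &prev, NULL);    /* a pending SIGCHLD is delivered here */
    }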

Use the provided helper functions in signal_support.c to block and unblock signals, which in turn rely on sigprocmask(2). To set up signal handlers, they use the sigaction(2) system call with sa_flags set to SA_RESTART. The mask of blocked signals is inherited when fork() is called. Consequently, the child will need to unblock any signals the parent had blocked before calling exec().
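For reference, the handler installation performed by those helpers looks roughly like the sketch below; the handler body here is only a placeholder:

    #include <signal.h>
    #include <string.h>

    static volatile sig_atomic_t child_changed;   /* placeholder: set when SIGCHLD arrives */

    static void sigchld_handler(int sig)
    {
        (void) sig;
        child_changed = 1;            /* a real shell records or reaps status changes */
    }

    static void install_sigchld_handler(void)
    {
        struct sigaction sa;
        memset(&sa, 0, sizeof sa);
        sa.sa_handler = sigchld_handler;
        sa.sa_flags = SA_RESTART;     /* restart interrupted calls such as read() */
        sigemptyset(&sa.sa_mask);
        sigaction(SIGCHLD, &sa, NULL);
    }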

3.2 Process Groups

User jobs may involve multiple processes. For instance, the command line input ls | grep filename requires that the shell start two processes, one to execute the ls and the other to execute the grep command. Aside from this example, child processes that a user program may start should usually be part of the same job so that the user can manage them as one unit. To help manage these scenarios, Unix introduced a way to group processes that makes it simpler for the shell and for the user to address them as one unit.

Each process in Unix is part of a group. Process groups are treated as an ensemble for the purpose of signal delivery and when waiting for processes. Specifically, the kill(2), killpg(2), and waitpid(2) system calls support the naming of process groups as possible targets. In this way, if a user wants to terminate or stop a job, it is possible for the shell to send a termination or stop signal to a process group that contains all processes that are part of this job. To facilitate this mechanism the shell must arrange for process groups to be created and for processes to be assigned to these groups.

Each process group has a designated leader, which is one of the processes in the group. To create a new group with itself as the leader, a process simply calls setpgid(0, 0). The process group id of a process group is equal to the process id of the leader. Child processes inherit the process group of their parent process initially. They can then form their own group if desired, or their parent process can place them into a different process group via setpgid(). The shell must create a new process group for each job and make sure that all processes that will be created for this job become members of this group. Note that while the process group management facilities are available to all user programs, only shell programs will typically make use of them – for most other programs, the default behavior of inheriting the parent’s process group is a desirable default.
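The following hedged sketch shows that group setup with fork(); calling setpgid() in both the parent and the child is a common idiom so the group is in place regardless of which process runs first (the posix_spawn-based alternative in Section 3.6 does this for you):

    #include <sys/types.h>
    #include <unistd.h>

    /* Sketch: fork a child and place it into the job's process group.
     * Pass pgid == 0 for a job's first process, which then leads a new group. */
    pid_t fork_into_group(pid_t pgid)
    {
        pid_t pid = fork();
        if (pid == 0) {                               /* child */
            setpgid(0, pgid);                         /* join (or create) the group */
            /* ... unblock inherited signals, set up fds, then exec() ... */
            _exit(127);                               /* placeholder: exec elided here */
        } else if (pid > 0) {                         /* parent */
            setpgid(pid, pgid == 0 ? pid : pgid);     /* same assignment, avoids a race */
        }
        return pid;
    }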

In addition to signals and waitpid, process groups are used to manage access to the terminal, as described next.

3.3 Managing Access To The Terminal

Running multiple processes on the same terminal creates a sharing issue: if multiple processes attempt to read from the terminal, which process should receive the input? Similarly, some programs - such as vi - output to the terminal in a way that does not allow them to share the terminal with others.

To solve this problem, Unix introduced the concept of a foreground process group. The kernel maintains such a group for each terminal. If a process in a process group that is not the foreground process group attempts to perform an operation that would require exclusive access to a terminal, it is sent a signal: SIGTTOU or SIGTTIN, depending on whether the use was for output or input. The default action taken in response to these signals is to suspend the processes in that group. If that happens, the processes’ parent (i.e., your shell) can learn about this status change by calling waitpid(). WIFSTOPPED(status) will be true in this case. To allow these processes to continue, their process group must be made the foreground process group of the controlling terminal via a call to tcsetpgrp(), and then the process group must be sent a SIGCONT signal. The shell will typically take this action in response to a ‘fg’ command issued by the user.
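Putting these calls together, the core of resuming a job in the foreground might look like the sketch below. It is a simplification: cush’s termstate_management.c helpers handle the terminal handoff and state restoration, and a real shell keeps waiting until every process of the job has stopped or exited:

    #include <signal.h>
    #include <sys/wait.h>
    #include <unistd.h>

    /* Sketch: give the terminal to a stopped job, wake it, and wait for it. */
    static void continue_job_in_foreground(int terminal_fd, pid_t job_pgid)
    {
        tcsetpgrp(terminal_fd, job_pgid);         /* job becomes the foreground process group */
        killpg(job_pgid, SIGCONT);                /* resume every process in the group */

        int status;
        waitpid(-job_pgid, &status, WUNTRACED);   /* returns when a member stops or terminates */

        tcsetpgrp(terminal_fd, getpgrp());        /* shell takes the terminal back */
    }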

Signals that are sent as a result of user input, such as SIGINT or SIGTSTP, are also sent to a terminal’s foreground process group. Note that this sending of signals is performed automatically by the operating system; it is not an action the shell takes. Delivering this signal to an entire process group makes it so that when a user hits Ctrl-c to terminate a job such as ls | grep filename, both the process running ls and the process running grep will receive the SIGINT signal, informing them of the user’s desire to terminate them. To ensure that such signals are delivered to the correct process group, the shell must arrange for these process groups to exist and be populated with the correct processes, and it must inform the OS kernel which process group the user intends to run in the foreground at a given point in time.

3.4 Managing The Terminal’s State

Many years ago, most Unix terminals were actual devices that had a console and a keyboard and that were connected to the main computer with some kind of serial interface such as RS-232. To control those devices, the OS device drivers would need to control a set of input and output flags collectively known as the terminal state. In modern systems, the most commonly used terminal type is a pseudo-terminal (pty) connected to an ssh network connection, yet this model still exists. You can type stty -a to see what those flags are, though you probably won’t care about their details.

Some processes change the state of the terminal in a certain way. For instance, vim puts the terminal in so-called “raw” mode, where it receives keystrokes as they are typed (as opposed to “cooked” mode, which requires the user to end a line with the enter key before it is received by a program). bash does this as well, and in fact your shell, which uses the readline library, does this too while reading user input.

This raises a management issue when the user switches between the shell’s command line and foreground process jobs. For instance, a user may start vim, then use Ctrl-z to stop it, run some other job in the foreground, then stop it, resume vim, exit vim, and resume the second job.

In this case, it is necessary, whenever the vim process is resumed, to restore the terminal state to what it was before vim was stopped. Interestingly, it is possible for a process to perform such restoration itself (in fact, vim does this by handling the SIGCONT signal).

However, if the shell performed such saving and restoration transparently, then any program that manipulates its terminal state could be run under a job control regime. Specifically, your shell should save the state of the terminal when a job process is suspended and restore it when the job is continued in the foreground by the user.

When the shell returns to the prompt, it must make itself the foreground process group of the terminal. In this case, it should also restore a known good terminal state. Your shell should sample this known good terminal state when it starts. You may find the functions provided in termstate_management.c useful, which already handle most of the logic.
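Those helpers are built on the tcgetattr(3)/tcsetattr(3) pair; the sketch below shows the underlying idea of sampling a known good state at startup and reapplying it later:

    #include <termios.h>

    static struct termios known_good_state;       /* sampled when the shell starts */

    /* Sketch: sample the terminal's current settings. */
    static void sample_terminal_state(int terminal_fd)
    {
        tcgetattr(terminal_fd, &known_good_state);
    }

    /* Sketch: reapply the saved settings, e.g. when the shell reclaims the terminal. */
    static void restore_terminal_state(int terminal_fd)
    {
        tcsetattr(terminal_fd, TCSADRAIN, &known_good_state);
    }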

This known good state is also the state that the terminal will be in if a new job is started by the user. Therefore, programs that are agnostic with respect to the state of the terminal will continue to work. However, there has to be a way for the user to change the default terminal settings programs encounter when they are run (as well as the terminal settings that are in effect while the shell is being used by a user). The stty command exists for this purpose. When run, it will display and/or change existing settings to suit a user’s preferences.

The shell must respect changes made by stty and replace its known good terminal state with the state the terminal was put in by the stty command. To that end, the following convention is used: if any foreground job exits with a success (zero) exit status, the current terminal state will be sampled by the shell and becomes the new known good state (as per the user’s intent.) Your shell should do this sampling. Make sure not to sample the terminal state in these cases:

• A job exits that was not started as a foreground job.

• A job exits that is not a foreground job at the time of its exit.

• A job terminates with a signal.

• A job exits but the exit status code is nonzero.

For jobs that consist of multiple processes, consider the last process in the pipeline. You will note that this heuristic is not perfect – it will in fact sample any successfully exiting job’s terminal state rather than just those where the user intends it – but this doesn’t pose a big problem in practice since the majority of programs don’t reprogram the terminal.

3.5 Pipes and I/O Redirection

A pipeline of commands is considered one job. All processes that form part of a pipeline must thus be part of the same process group, as already discussed in Section 3.3. Note that all processes that are part of a pipeline are children of the shell, e.g., if a user runs a | b then the process executing b is not a child process of the process executing the program a.

To implement the pipes themselves, use the pipe(2) system call, or alternatively the pipe2(2) GNU extension. The latter allows you to set flags on the returned file descriptors such as O_CLOEXEC. A pipe must be set up by the parent shell process before a child is forked. The forked child will inherit the file descriptors that are part of the pipe. The child must then redirect its standard file descriptors to the pipe’s input or output end as needed using the dup2(2) system call. If the user used the |& symbol instead of the | symbol, both standard output and standard error should be redirected to the pipe.
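For illustration, a two-command pipeline such as a | b can be wired up as in the sketch below (error checking, process-group setup, and |& handling omitted); note how every process closes the pipe ends it does not use, which matters for the EOF behavior discussed next:

    #include <unistd.h>

    /* Sketch: run argv_a | argv_b, connecting a's stdout to b's stdin. */
    static void run_two_command_pipeline(char **argv_a, char **argv_b)
    {
        int fds[2];
        pipe(fds);                          /* fds[0]: read end, fds[1]: write end */

        if (fork() == 0) {                  /* first child: writes into the pipe */
            dup2(fds[1], STDOUT_FILENO);
            close(fds[0]);
            close(fds[1]);
            execvp(argv_a[0], argv_a);
            _exit(127);                     /* exec failed */
        }
        if (fork() == 0) {                  /* second child: reads from the pipe */
            dup2(fds[0], STDIN_FILENO);
            close(fds[0]);
            close(fds[1]);
            execvp(argv_b[0], argv_b);
            _exit(127);                     /* exec failed */
        }
        close(fds[0]);                      /* the parent keeps neither end open, */
        close(fds[1]);                      /* otherwise b never sees EOF */
    }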

Although the parent shell process creates pipes for each pair of communicating children before they are forked, it will not itself write to or read from the pipes it creates. Therefore, you must make sure that the parent shell process closes the file descriptors referring to the pipe’s ends after each child has been forked. This is necessary for two reasons: first, to avoid leaking file descriptors; second, to ensure the proper behavior of programs such as /bin/cat if the user asks the shell to execute them. To see why, we must first discuss what happens to file descriptors on fork(), close(), and exit().

Each file descriptor represents a reference to an underlying kernel object. fork() makes a shallow copy of these descriptors. After fork(), both the child and the parent process have access to any object the parent process may have created (i.e., open files or other kernel objects). Closing a file descriptor in the (parent) shell process affects only the current process’s access to the underlying object. Hence when the parent shell closes the file descriptors referring to the pipe it created, the child processes will still be able to access the pipe’s ends, allowing them to communicate with the other commands in the pipeline.

The actual object (such as a pipe or file) is destroyed only when the last process that has at least one open file descriptor referring to the object closes the last file descriptor referring to it. If you fail to close the pipe’s file descriptors in the parent process (your shell), you compromise the correct functioning of programs that rely on taking action when their standard input stream signals the end-of-file condition. For instance, the /bin/cat program will exit if its standard input stream reaches EOF, which in the case of a pipe happens if and only if all descriptors pointing to the pipe’s write end are closed. So if cat’s standard input stream is connected to a pipe for which the shell still has an open file descriptor, cat will never “see” EOF on its standard input stream and will appear stuck.

Lastly, note that when a process terminates for whatever reason, via exit() or via a signal, all file descriptors it had open are closed by the kernel as if the process had called close() before terminating. This means that you do not need to worry about making sure that file descriptors you open for the shell’s child processes are closed after these child processes exit. However, since the shell is a long-running program that does not exit between user commands, the shell must close its own copies of these file descriptors to avoid the above-mentioned leakage. If it did not, it would eventually run out of file descriptors because the OS imposes a per-process limit on their number.

Although the processes that are part of a pipeline typically interact with each other through the pipe that connects their standard streams, they are still independent processes. This means they can exit, or terminate abnormally, independently and separately. When your shell calls waitpid() to learn about these processes’ status changes, it will learn about each one separately. You will need to map the information you learn about one process to the job to which it belongs, using a suitable data structure you define in your shell implementation.

Here is a brief table summarizing facts about the status changes and the corresponding macros you can apply to the status (out) parameter returned by waitpid:

 

| Event | How to check for it | Additional info | Process stopped? | Process dead? |
|-------|---------------------|-----------------|------------------|---------------|
| User stops fg process with Ctrl-Z | WIFSTOPPED | WSTOPSIG equals SIGTSTP | yes | no |
| User stops process with stop (cush) or kill -STOP (bash) | WIFSTOPPED | WSTOPSIG equals SIGSTOP | yes | no |
| Non-foreground process wants terminal access | WIFSTOPPED | WSTOPSIG equals SIGTTOU or SIGTTIN | yes | no |
| Process exits via exit() | WIFEXITED | WEXITSTATUS has return code | no | yes |
| User terminates process with Ctrl-C | WIFSIGNALED | WTERMSIG equals SIGINT | no | yes |
| User terminates process with kill | WIFSIGNALED | WTERMSIG equals SIGTERM | no | yes |
| User terminates process with kill -9 | WIFSIGNALED | WTERMSIG equals SIGKILL | no | yes |
| Process has been terminated (general case) | WIFSIGNALED | WTERMSIG equals signal number | no | yes |

 

 

Additional information can be found in the GNU C library manual, available at http://www.gnu.org/s/libc/manual/html_node/index.html. Read, in particular, the sections on Signal Handling and Job Control.
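Applied in code, dispatching on a status value from the table might look like the following sketch; mapping the pid back to the owning job is left to your own data structure:

    #include <stdio.h>
    #include <sys/wait.h>

    /* Sketch: classify one (pid, status) pair returned by waitpid(). */
    static void report_status_change(pid_t pid, int status)
    {
        if (WIFSTOPPED(status))
            printf("process %d stopped by signal %d\n", (int) pid, WSTOPSIG(status));
        else if (WIFEXITED(status))
            printf("process %d exited with status %d\n", (int) pid, WEXITSTATUS(status));
        else if (WIFSIGNALED(status))
            printf("process %d terminated by signal %d\n", (int) pid, WTERMSIG(status));
    }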

 

3.6 Use of posix_spawn

In a 2019 paper published at the HotOS workshop, Baumann et al. [1] criticized the use and teaching of the Unix style of creating a new process by first creating a clone via fork(), then customizing the new process’s environment through actions the clone performs on itself before executing a new program. A key weakness of this approach is that it is incompatible with multithreaded programs. They propose the use of an existing alternative API instead, i.e., posix_spawn(3). This call combines fork() and exec() into one, and it also can be customized so that the child process will perform the necessary operations to set up or join a process group and to redirect inherited file descriptors as desired.

However, posix_spawn as defined by POSIX lacks one important feature, which is to provide the child process with ownership of its terminal. This action cannot be performed in the parent since doing so would create a race condition: the child may reach a point where it assumes it has terminal ownership before the parent assigns ownership to it. For this project, you have access to a version of posix_spawn that includes a non-portable extension posix_spawnattr_tcsetpgrp_np(posix_spawnattr_t *attr, int fd) that allows you to provide a file descriptor referring to the terminal for which the child process should acquire ownership.

For your implementation, you are encouraged to use posix_spawn in lieu of fork + exec. If you choose to do so, your implementation will avoid the potential sources of bugs that the use of fork() introduces, such as inadvertently attempting to update parent data structures in the child process, and in general will exhibit easier-to-understand control flow and memory access semantics. Control flow will be traditional and linear: posix_spawn will be called once, and return once, like any ordinary function. It will spawn a new program in a new process as a side effect. This child process will never directly access data structures inherited from the parent, though it relies on inheriting open file descriptors as in the fork case. posix_spawn also does not change the fact that the created process will immediately run concurrently with the parent process when it returns. In other words, you may think of it as a combination of fork and exec, not of fork, exec, and wait.

However, it is difficult to use posix_spawn successfully if you do not understand how fork and exec interact with file descriptors and process groups, so the explanation in the preceding sections still applies and must be thoroughly understood. Everything related to job management applies equally, as it is independent of the method used to start the child processes.

When using posix_spawn, you must observe all of the following hints (a combined sketch follows the list):

• Use the posix_spawnp variant to be able to find programs in the user’s path.

• Use posix_spawn_file_actions_adddup2 to wire up pipe file descriptors and handle the redirection of standard error.

• Use posix_spawn_file_actions_addopen to wire up I/O redirection from/to files.

• Use posix_spawnattr_setpgroup along with the POSIX_SPAWN_SETPGROUP flag to establish or join a new process group.

• Use posix_spawnattr_tcsetpgrp_np along with the POSIX_SPAWN_TCSETPGROUP flag to give the child’s process group terminal ownership.

• Use posix_spawnattr_setflags to set the desired flags. You may include POSIX_SPAWN_USEVFORK to make use of the specialized (and slightly faster) vfork() system call. Note that you may call this function only once since later calls will replace the flags set in earlier ones. Thus, you need to bitwise combine all necessary flags into one value before calling it with this value.

• You do not need to perform a setpgid() call in the parent since the race condition necessitating this call no longer exists: the call to spawn won’t return until after the child has been placed into its process group.

• You will need to pass the current environment as the last argument. Add an external declaration like so: extern char **environ;

• Lastly, note that the resulting code won’t necessarily be shorter (my version is 94 vs. 67 lines for the fork/exec variant), but very likely less confusing.

• The Makefile will set the correct include path and library flags to link with the required version of posix_spawnp, which overrides the version in the installed GNU C library. You will need to build the library first; use

(cd posix_spawn; make)

to that end.
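Combining these hints, spawning one pipeline member might look like the hedged sketch below. It assumes the course-provided posix_spawn library (for posix_spawnattr_tcsetpgrp_np() and POSIX_SPAWN_TCSETPGROUP) and glibc’s POSIX_SPAWN_USEVFORK; the function name and parameters are illustrative, not part of the starter code:

    #include <spawn.h>
    #include <stdbool.h>
    #include <unistd.h>

    extern char **environ;

    /* Sketch: spawn one command of a pipeline into process group pgid (0 = new group),
     * optionally giving that group the terminal; pipe_in/pipe_out are -1 if unused. */
    static pid_t spawn_pipeline_member(char **argv, pid_t pgid, bool give_terminal,
                                       int pipe_in, int pipe_out)
    {
        posix_spawn_file_actions_t fa;
        posix_spawn_file_actions_init(&fa);
        if (pipe_in != -1)
            posix_spawn_file_actions_adddup2(&fa, pipe_in, STDIN_FILENO);
        if (pipe_out != -1)
            posix_spawn_file_actions_adddup2(&fa, pipe_out, STDOUT_FILENO);

        posix_spawnattr_t attr;
        posix_spawnattr_init(&attr);
        posix_spawnattr_setpgroup(&attr, pgid);

        short flags = POSIX_SPAWN_SETPGROUP | POSIX_SPAWN_USEVFORK;   /* USEVFORK is a glibc extension */
        if (give_terminal) {
            posix_spawnattr_tcsetpgrp_np(&attr, STDIN_FILENO);        /* course-provided extension */
            flags |= POSIX_SPAWN_TCSETPGROUP;                         /* course-provided extension */
        }
        posix_spawnattr_setflags(&attr, flags);   /* all flags combined in a single call */

        pid_t pid = -1;
        posix_spawnp(&pid, argv[0], &fa, &attr, argv, environ);

        posix_spawn_file_actions_destroy(&fa);
        posix_spawnattr_destroy(&attr);
        return pid;
    }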

4 Use of Git

You will use Git for managing your source code. Git is a distributed version control system in which every working directory contains a full repository, and thus the system can be used independently of a (centralized) repository server. Developers can commit changes to their local repository. However, in order to share their code with others, they must then push those commits to a remote repository. Your remote repository will be hosted on git.cs.vt.edu, which provides a facility to share this repository among group members. For further information on git in general you may browse the official Git documentation: http://git-scm.com/documentation, but feel free to ask questions on the forum as well! The use of git (or any distributed source code control system) may be new to some students, but it is a prerequisite skill for most programming-related internships or jobs.

You will use a departmental instance of Gitlab for this class. You can access the instance with your SLO credentials at https://git.cs.vt.edu/.

The provided base code for the project is available on Gitlab at https://git.cs.vt.edu/cs3214-staff/cs3214-cush.

One team member must fork this repository by viewing this page and clicking the fork link. This will create a new repository for you with a copy of the contents. From there you must view your repository settings, and set the visibility level to private. On the settings page you may also invite your other team member to the project so that they can view and contribute.

Group members may then make a local copy of the repository by issuing a git clone command. The repository reference can be found on the project page, such as git@git.cs.vt.edu:teammemberwhoclonedit/cs3214-cush.git. To clone over SSH (which you may need to do on rlogin), you will have to add an SSH public key to your profile by visiting https://git.cs.vt.edu/-/user_settings/ssh_keys. This key is separate from the key you added to your ~/.ssh/authorized_keys file. Although you could use the same key pair you use to log into rlogin, we recommend using a separate key pair. This way you can avoid storing the private key you use to access rlogin on rlogin itself.

If updates or bug fixes to this code are required, they will be announced on the forum. You will be required to use version control for this project. When working in a team, both team members should have a roughly equal number of committed lines of code to show their respective contributions.

Please note. To facilitate the automated grading of your git usage, please follow the following rules:

• Do not rename the repo when you fork it.

• Do not create a git group; fork the repo under the namespace of one of the two group members.

• Make sure that, once you have finished, your final product will be on the master branch.

• Make sure that the git commit log on this branch shows the contributions of both team partners under their CS pid.

• You may use branches during development, but if you do, make sure to merge those branches. Don’t squash your commits when you do so.

• You must use git.cs.vt.edu and not any external git server.

4.1 Code Base

To build the provided code, run make in the src directory. (Don’t forget to build the posix_spawn library first.)

The code contains a command line parser that implements the following grammar:

cmd_line  :  cmd_list

cmd_list  :
          |  pipeline
          |  cmd_list ';'
          |  cmd_list '&'
          |  cmd_list ';' pipeline
          |  cmd_list '&' pipeline

pipeline  :  command
          |  pipeline '|' command
          |  pipeline '|&' command

command   :  WORD
          |  input
          |  output
          |  command WORD
          |  command input
          |  command output

input     :  '<' WORD

output    :  '>' WORD  |  '>>' WORD  |  '>&' WORD

 

Look at the provided cush.c main function to see how to invoke the parser. If a command line is semantically correct, the parser code will create an ast_command_line data structure, which refers to a list of ast_pipeline structures. Each ast_pipeline is used to create a job. It may consist of one or more individual commands that form a pipeline. Each command is represented as an ast_command structure. Study the definitions of these structures.

By default, the provided code will read a line, parse it, and dump the parsed command line to stdout.

The files signal_support.c and termstate_management.c contain a number of utility functions for dealing with signals and managing the terminal state, which do most of the heavy lifting for you. We strongly recommend you use these functions rather than directly calling the functions described in the textbook.

5 Builtins

The basic builtins our tests expect include kill, fg, bg, jobs, stop, exit.

In addition, you should implement at least 2 builtin commands or functionality extensions: a simple one and a more complex one. Ideas for simple builtins include:

• A custom prompt (e.g. outputting hostname and current directory)

• Setting and unsetting environment variables

• cd to change the current directory (should support changing to the home directory when invoked with just cd); see the sketch after this list

• Other simple commands
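As an example of a simple builtin, the cd idea from the list above needs little more than chdir(2) and a $HOME fallback; the sketch assumes your parser has already split off the argument:

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    /* Sketch of a 'cd' builtin: with no argument, change to the home directory. */
    static int builtin_cd(const char *dir)
    {
        if (dir == NULL)
            dir = getenv("HOME");            /* bare "cd" goes home */
        if (dir == NULL || chdir(dir) != 0) {
            perror("cd");
            return -1;
        }
        return 0;
    }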

Ideas for “more complex” builtins include:

• A user-customizable prompt (e.g. like bash’s PS1) that provides a means for the user to set the prompt. Implement a substantial subset of PS1’s prompt escape sequences, see here.

• Command-line history (perhaps using GNU’s History library). This should include the features commonly provided by GNU history, such as event designators; if GNU history is properly integrated, these will come for free.

• Glob expansion (e.g., *.c). You may use GNU’s glob library, see glob(3).

• Support for aliases (definition and expansion)

• Shell variables

• Timing commands: a builtin version of “time”, or command time-outs

• A directory stack maintained via pushd, popd, etc.

• Backquote substitution

• Smart command-line completion, i.e., help with mistyped commands

• Embedding applications: scripting languages, web servers, etc.

Generally, we expect more complex builtins to add significant value for the user.

A side-note on Unix philosophy - in general, Unix implements functionality using many small programs and utilities. As such, built-in commands are often only those that must be implemented within the shell, such as cd. In addition, essential commands such as ’kill’ are often built-in to make sure an operator can execute those commands even if no new processes can be forked. Your builtins should generally stay with this philosophy and implement only functionality that is not already available using Unix commands or that would be better implemented using separate programs. If in doubt, ask.

 
