There are several other types of testing you will commonly hear referred to in addition to function and unit testing. Some differ in scope (how much of the system is under test), others in aim (performance versus correctness), so they can seem like a bit of a grab bag. The goal here is to become familiar with the commonly used terms and to understand that there are many different, sometimes competing, concerns in the development of a testing strategy.
Acceptance tests can take any form, depending on circumstance. The purpose of an acceptance test is to verify that the system under test is "ready" for the next step in its life. This might mean moving from one development team to another, moving from "alpha" to "beta", or being delivered to a particular customer.
An integration test verifies that two or more separate components of a software system work correctly together. Here are two situations where this kind of testing is useful:
Let's say that we've written a web app in PHP, a common web scripting language. It is very common to deploy PHP applications behind a separate web server such as Apache or Nginx. The web server receives requests from the network, then runs the appropriate PHP code for requests that match rules defined by the developer or system administrator. An integration test for this setup exercises the web server and the PHP code together, rather than testing the PHP code in isolation.
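To make the first situation concrete, here is a minimal sketch of what such a test could look like. It is written in Go (the language used for the examples later in this section) rather than PHP, and the address and routes it checks are assumptions; the point is that each request passes through the web server's routing rules and the application code together.

package app_test

import (
    "net/http"
    "testing"
)

// Integration test sketch: assumes the web server and the PHP application
// are running together at this hypothetical local address.
func TestRouting(t *testing.T) {
    resp, err := http.Get("http://localhost:8080/status.php")
    if err != nil {
        t.Fatalf("could not reach the web server: %v", err)
    }
    resp.Body.Close()
    if resp.StatusCode != http.StatusOK {
        t.Errorf("GET /status.php returned %d, want 200", resp.StatusCode)
    }

    // A request that matches no routing rule should fall through to a 404.
    resp, err = http.Get("http://localhost:8080/no-such-page")
    if err != nil {
        t.Fatalf("could not reach the web server: %v", err)
    }
    resp.Body.Close()
    if resp.StatusCode != http.StatusNotFound {
        t.Errorf("GET /no-such-page returned %d, want 404", resp.StatusCode)
    }
}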
It is also common for web applications to rely on a database such as MySQL. During development, however, developers often choose a simpler database system such as SQLite that lets them iterate more quickly (make a change, manually test the change). An integration test that runs the application's queries against the same kind of database used in production can catch differences that the simpler development database hides.
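As a sketch of this second situation, the test below talks to a real MySQL server instead of the SQLite database used during development. The driver, connection string, and table are assumptions chosen for illustration; the idea is simply that the SQL the application relies on gets exercised against the database it will actually run on.

package storage_test

import (
    "database/sql"
    "testing"

    _ "github.com/go-sql-driver/mysql" // one commonly used MySQL driver
)

// Integration test sketch: runs our SQL against a real MySQL server rather
// than the SQLite database used during development. The connection string
// is an assumption; point it at a disposable test database.
func TestCommentRoundTrip(t *testing.T) {
    db, err := sql.Open("mysql", "tester:secret@tcp(127.0.0.1:3306)/app_test")
    if err != nil {
        t.Fatalf("open: %v", err)
    }
    defer db.Close()
    // Temporary tables are per-connection, so keep the pool at one connection.
    db.SetMaxOpenConns(1)

    if _, err := db.Exec("CREATE TEMPORARY TABLE comments (id INT AUTO_INCREMENT PRIMARY KEY, body TEXT)"); err != nil {
        t.Fatalf("create table: %v", err)
    }
    if _, err := db.Exec("INSERT INTO comments (body) VALUES (?)", "hello"); err != nil {
        t.Fatalf("insert: %v", err)
    }
    var body string
    if err := db.QueryRow("SELECT body FROM comments WHERE id = 1").Scan(&body); err != nil {
        t.Fatalf("select: %v", err)
    }
    if body != "hello" {
        t.Errorf("read back %q, want %q", body, "hello")
    }
}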
An end-to-end test is, at its most basic, an integration test that covers an entire "trip" through the system, from input to output, and everything in between.
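For the web application above, such a trip might look like the sketch below: input goes in at the very front (an HTTP form post) and the test checks that it comes out the other end (the rendered page), passing through the web server, the application code, and the database along the way. The address, route, and form field are assumptions.

package app_test

import (
    "io"
    "net/http"
    "net/url"
    "strings"
    "testing"
)

// End-to-end test sketch: assumes the whole stack (web server, application
// code, database) is running at this hypothetical address.
func TestPostedCommentAppears(t *testing.T) {
    base := "http://localhost:8080"

    // Input: submit a comment through the public form handler.
    resp, err := http.PostForm(base+"/comments", url.Values{"body": {"hello from the e2e test"}})
    if err != nil {
        t.Fatalf("post: %v", err)
    }
    resp.Body.Close()

    // Output: the comment should now appear on the rendered page.
    resp, err = http.Get(base + "/comments")
    if err != nil {
        t.Fatalf("get: %v", err)
    }
    defer resp.Body.Close()
    page, err := io.ReadAll(resp.Body)
    if err != nil {
        t.Fatalf("read: %v", err)
    }
    if !strings.Contains(string(page), "hello from the e2e test") {
        t.Error("posted comment did not show up in the page")
    }
}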
A regression test can be any kind of test that attempts to prevent a bug from being re-introduced after it has been fixed. This situation is known as a "regression" because the state of the program regresses: progress is lost.
For example, say we have a function that computes the largest number in a list of signed integers. In Go, it might have a signature like this one:
func largest(candidates []int) int
Furthermore, we have some test cases, written here as checks inside a Go test function (so t is the *testing.T value that the test runner passes in). You might see something wrong with our test cases right away, but if not, hold tight.
case0 := []int{1, 2, 3}
if got := largest(case0); got != 3 {
    t.Errorf("largest(case0) = %d, want 3", got)
}
case1 := []int{3, 2, 1}
if got := largest(case1); got != 3 {
    t.Errorf("largest(case1) = %d, want 3", got)
}
We get a bug report from someone using our code. Occasionally our function crashes. After diving in, we discover that our function ignores negative numbers! We fix our code, and then add a third test case to our test suite for this function:
// Regression test: negative numbers
case2 := []int{-1, -2, -3}
if got := largest(case2); got != -1 {
    t.Errorf("largest(case2) = %d, want -1", got)
}
What we have above is a regression test. Apart from the comment, nothing marks it as one (and noting that a test came from a bug report is prudent, precisely so that no one removes it later), but that's what it is. This kind of defect-driven testing can give us confidence in our test suite and keep old bugs from coming back when someone refactors the code to improve performance or add a new feature.
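For completeness, here is one sketch of what the fixed largest might look like, consistent with the bug described above (the buggy version skipped negative candidates, so an all-negative list left it with nothing to return). This is an illustration, not necessarily how the original code was written.

// largest returns the largest value in candidates.
// The earlier, buggy version skipped negative numbers; this version
// considers every element. It assumes candidates is never empty.
func largest(candidates []int) int {
    best := candidates[0]
    for _, c := range candidates[1:] {
        if c > best {
            best = c
        }
    }
    return best
}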
A smoke test is generally a very simple, high-level test that attempts to ensure that nothing is fundamentally wrong with the system under test. Smoke tests are often used to verify that the system is ready to have its full test suite run. Smoke tests are also used during development on prototypes which, due to their disposable nature, do not warrant a full test suite.
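For the web application above, a smoke test might be as small as the sketch below: one cheap request with a short timeout, run before anything more expensive. The address and the expectation of a 200 response are, again, assumptions.

package app_test

import (
    "net/http"
    "testing"
    "time"
)

// Smoke test sketch: a single cheap check that the system came up at all,
// suitable to run before the full (and slower) test suite.
func TestSmoke(t *testing.T) {
    client := &http.Client{Timeout: 5 * time.Second}
    resp, err := client.Get("http://localhost:8080/")
    if err != nil {
        t.Fatalf("server unreachable: %v", err)
    }
    defer resp.Body.Close()
    if resp.StatusCode != http.StatusOK {
        t.Fatalf("GET / returned %d, want 200", resp.StatusCode)
    }
}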