Nonlinear integral equations

Nonlinear integral equations are a vital area of study in mathematics, offering applications in diverse fields such as physics, engineering, and biology. These equations, characterized by the presence of nonlinear terms involving the unknown function, are inherently more complex than their linear counterparts. Despite these challenges, recent advancements in analytical and numerical methods have made it possible to solve certain classes of nonlinear integral equations, though not all.

Unlike linear integral equations, the solutions to nonlinear integral equations are often not unique. However, under specific conditions, the existence of a unique solution can be ensured. Furthermore, there is a profound connection between integral equations and differential equations, as many nonlinear integral equations can be derived from differential equations. This relationship not only highlights their significance but also guides the development of solution techniques.

In general, nonlinear integral equations can take the form of Volterra or Fredholm integral equations, defined as follows:

  1. Nonlinear Volterra Integral Equation:

    u(x) = f(x) + \lambda \int_0^x K(x, t) F(u(t)) \, dt
  2. Nonlinear Fredholm Integral Equation:

    u(x) = f(x) + \lambda \int_a^b K(x, t) F(u(t)) \, dt

Here, u(x) is the unknown function, f(x) is a known function, K(x, t) is the kernel, and F(u(t)) is a nonlinear function of the unknown. If F(u(t)) = u^n(t) for n \geq 2, the equation is nonlinear. Conversely, when F(u(t)) = u(t), the equation reduces to a linear form.

For instance:

  • A Volterra integral equation: u(x) = x + \lambda \int_0^x (x - t) u^2(t) \, dt
  • A Fredholm integral equation: u(x) = x + \lambda \int_0^1 \cos(x) \, u^3(t) \, dt

These examples demonstrate the nonlinear nature of the equations, where the dependence on powers of u(t) introduces additional complexity in finding solutions.
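As a numerical illustration, the Volterra example above can be solved by successive substitution: start from a guess and repeatedly evaluate the right-hand side. The sketch below (pure Python) uses trapezoidal quadrature; the value λ = 0.1, the grid size, and the iteration count are illustrative assumptions, not part of the original example.

```python
# Successive substitution for u(x) = x + lam * ∫_0^x (x - t) u(t)^2 dt on [0, 1].
# lam, grid size, and iteration count are illustrative choices.
lam, n = 0.1, 201
h = 1.0 / (n - 1)
xs = [i * h for i in range(n)]

def apply_rhs(u):
    """Evaluate the right-hand side at every grid point (trapezoid rule)."""
    out = []
    for i, x in enumerate(xs):
        f = [(x - xs[j]) * u[j] ** 2 for j in range(i + 1)]
        integral = h * (sum(f) - 0.5 * (f[0] + f[-1])) if i > 0 else 0.0
        out.append(x + lam * integral)
    return out

u = list(xs)            # initial guess u_0(x) = x
for _ in range(25):     # iterate u_{k+1} = RHS(u_k)
    u = apply_rhs(u)

residual = max(abs(a - b) for a, b in zip(u, apply_rhs(u)))
```

For small λ the right-hand side is a contraction, so the iterates settle quickly; the final residual measures how far u is from satisfying the equation on the grid.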

The method of successive approximations

The method of successive approximations, also known as the iterative method or Picard’s method (in the context of differential equations), is a mathematical technique used to approximate solutions to equations or problems where finding an exact solution is challenging. This method involves iteratively refining guesses or estimates of the solution to converge to the correct answer. It is widely used in solving equations, fixed-point problems, and certain types of differential equations.

Core Idea

The method relies on the idea of generating a sequence of approximations (x_0, x_1, x_2, \dots) such that:

x_{n+1} = g(x_n),

where g(x) is a function that maps a previous guess x_n to the next guess x_{n+1}.

Under certain conditions (e.g., if g(x) is a contraction mapping), the sequence will converge to a fixed point x^*, which is a solution to the equation g(x) = x.
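The iteration is short to express in code. A minimal sketch (Python; the function name and the example g(x) = cos x are illustrative choices):

```python
import math

def fixed_point(g, x0, tol=1e-12, max_iter=100):
    """Iterate x_{n+1} = g(x_n) until successive guesses agree to within tol."""
    x = x0
    for _ in range(max_iter):
        x_next = g(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# g(x) = cos x is a contraction near its fixed point, since |g'(x)| = |sin x| < 1 there.
x_star = fixed_point(math.cos, 1.0)   # converges to the solution of cos x = x
```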

Mathematical Concepts Involved

  1. Fixed-Point Theory:

    • A fixed point of a function g(x) is a value x^* such that g(x^*) = x^*.
    • The method of successive approximations is a tool to find these fixed points.
  2. Contraction Mapping:

    • A function g(x) is a contraction on a domain D if there exists a constant L, with 0 \leq L < 1, such that: |g(x) - g(y)| \leq L |x - y| \quad \forall x, y \in D.
    • If g(x) is a contraction, the Banach fixed-point theorem guarantees that the sequence (x_n) converges to a unique fixed point.
  3. Initial Approximation:

    • The method starts with an initial guess x_0. The choice of x_0 can affect the speed of convergence but not the final solution if the method converges.
  4. Error Analysis:

    • The error at the n-th step can often be bounded as: |x_n - x^*| \leq \frac{L^n}{1 - L} |x_1 - x_0|, indicating exponential decay of the error when L < 1.
  5. Iterative Form:

    • For specific types of problems, like differential equations, the method can be expressed as: y_{n+1}(x) = y_0(x) + \int_{x_0}^x f(t, y_n(t)) \, dt, where y_n(x) is the n-th approximation.
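The error bound in item 4 can be checked numerically. In the sketch below, g(x) = cos x is a contraction on [0.6, 0.9], an interval containing the fixed point and all iterates, so L = max |g'| = sin 0.9 there; all concrete numbers are illustrative choices:

```python
import math

g = math.cos
x0 = 0.8
L = math.sin(0.9)               # max |g'(x)| = |sin x| on [0.6, 0.9]
x_star = 0.7390851332151607     # fixed point of cos x, to high precision

xs = [x0]
for _ in range(20):
    xs.append(g(xs[-1]))

# Actual errors versus the a priori bound L^n/(1-L) |x_1 - x_0|.
errors = [abs(xs[n] - x_star) for n in range(1, 21)]
bounds = [L**n / (1 - L) * abs(xs[1] - xs[0]) for n in range(1, 21)]
```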

Applications

  1. Solving Nonlinear Equations: For f(x) = 0, rewrite it as x = g(x), then apply the iterative formula:

    x_{n+1} = g(x_n).
  2. Differential Equations (Picard’s Iteration): Solve initial value problems of the form:

    y'(x) = f(x, y), \quad y(x_0) = y_0.
  3. Numerical Methods: Used in numerical root-finding techniques like the Newton-Raphson method.

Examples

1. Solving a Nonlinear Equation

Solve x^2 - 2 = 0 using successive approximations.

  1. Rewrite as x = g(x), e.g., g(x) = \frac{2}{x}.
  2. Start with an initial guess x_0 = 1.
  3. Iteratively compute: x_{n+1} = \frac{2}{x_n}. Example iterations:
    • x_1 = \frac{2}{1} = 2,
    • x_2 = \frac{2}{2} = 1,
    • x_3 = \frac{2}{1} = 2.

Here, the sequence oscillates between 1 and 2. A better function g(x) (e.g., g(x) = \frac{x + 2/x}{2}) would ensure convergence.
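The two choices of g can be compared directly (a short illustrative script):

```python
def iterate(g, x0, n):
    """Return [x_0, x_1, ..., x_n] for the iteration x_{k+1} = g(x_k)."""
    xs = [x0]
    for _ in range(n):
        xs.append(g(xs[-1]))
    return xs

cycle = iterate(lambda x: 2.0 / x, 1.0, 4)               # oscillates: 1, 2, 1, 2, 1
averaged = iterate(lambda x: (x + 2.0 / x) / 2, 1.0, 6)  # converges to sqrt(2)
```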

2. Solving a Differential Equation (Picard’s Method)

Solve y'(x) = x + y, y(0) = 1.

  1. Start with y_0(x) = 1.
  2. Define the iterative formula: y_{n+1}(x) = 1 + \int_0^x (t + y_n(t)) \, dt.
  3. Compute:
    • y_1(x) = 1 + \int_0^x (t + 1) \, dt = 1 + \left[\frac{t^2}{2} + t\right]_0^x = 1 + x + \frac{x^2}{2},
    • y_2(x) = 1 + \int_0^x \left(t + 1 + t + \frac{t^2}{2}\right) dt = 1 + \int_0^x \left(\frac{t^2}{2} + 2t + 1\right) dt = 1 + x + x^2 + \frac{x^3}{6}.

The sequence converges to the exact solution y(x) = 2e^x - x - 1.
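Because every iterate here is a polynomial, the iteration can be reproduced exactly with rational coefficient lists (a pure-Python sketch; the helper names are illustrative):

```python
from fractions import Fraction

def integrate(poly):
    """Antiderivative of c0 + c1*x + ... with zero constant term."""
    return [Fraction(0)] + [c / Fraction(k + 1) for k, c in enumerate(poly)]

def picard_step(y):
    """y_{n+1}(x) = 1 + ∫_0^x (t + y_n(t)) dt, on coefficient lists."""
    integrand = list(y) + [Fraction(0)] * max(0, 2 - len(y))
    integrand[1] += 1                    # add the 't' term of the integrand
    result = integrate(integrand)
    result[0] += 1                       # add the initial value y(0) = 1
    return result

y = [Fraction(1)]                        # y_0(x) = 1
for _ in range(6):
    y = picard_step(y)
# Coefficients approach those of 2e^x - x - 1: 1, 1, 1, 1/3, 1/12, ...
```

After six steps the coefficients agree with the Taylor series of 2e^x - x - 1 through degree 6.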

Advantages

  1. Simple to implement.
  2. Provides successive refinements for greater accuracy.
  3. Can handle nonlinear problems effectively.

Limitations

  1. Requires careful selection of the function g(x)g(x) for convergence.
  2. Convergence can be slow for some functions.
  3. May diverge if the contraction condition is not satisfied.

Picard’s method of successive approximations

Picard’s method is an iterative technique used to solve initial value problems for differential equations. The process involves converting the differential equation into an equivalent integral equation, then iteratively refining the solution by substituting the approximation from the previous step into the integral equation.

General Formulation

The first-order initial value problem:

\frac{du}{dx} = f(x, u(x)), \quad u(a) = b,

can be transformed into an equivalent integral equation:

u(x) = b + \int_a^x f(t, u(t)) \, dt.

Picard’s method starts with an initial approximation, u_0(x) = b, and generates successive approximations:

u_{n+1}(x) = b + \int_a^x f(t, u_n(t)) \, dt,

where n = 0, 1, 2, \dots.

Under suitable conditions (e.g., f(x, u) is continuous and satisfies a Lipschitz condition in u), the sequence of approximations \{u_n(x)\} converges to the unique solution u(x).

Example 4.1

Problem

Solve the initial value problem:

\frac{du}{dx} = x + u^2, \quad u(0) = 0.

Solution

Transform into the integral equation:

u(x) = \int_0^x (t + u^2(t)) \, dt.
  • Zeroth Approximation:

    u_0(x) = 0.
  • First Approximation: Substitute u_0(x) = 0 into the integral equation:

    u_1(x) = \int_0^x t \, dt = \frac{x^2}{2}.
  • Second Approximation: Substitute u_1(x) = \frac{x^2}{2} into the integral equation:

    u_2(x) = \int_0^x \left(t + \left(\frac{t^2}{2}\right)^2\right) \, dt = \frac{x^2}{2} + \frac{x^5}{20}.
  • Third Approximation: Substitute u_2(x) = \frac{x^2}{2} + \frac{x^5}{20} into the integral equation:

    u_3(x) = \int_0^x \left(t + \left(\frac{t^2}{2} + \frac{t^5}{20}\right)^2\right) \, dt.

    After expanding and integrating:

    u_3(x) = \frac{x^2}{2} + \frac{x^5}{20} + \frac{x^8}{160} + \frac{x^{11}}{4400}.

Continuing this process, the solution takes the form of a power series:

u(x) = \frac{x^2}{2} + \frac{x^5}{20} + \frac{x^8}{160} + \dots.
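The same expansion can be generated automatically with exact rational arithmetic on polynomial coefficient lists (a pure-Python sketch; the helper names are illustrative):

```python
from fractions import Fraction

def pmul(a, b):
    """Multiply two polynomials given as coefficient lists."""
    out = [Fraction(0)] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def pint(p):
    """∫_0^x p(t) dt as a coefficient list."""
    return [Fraction(0)] + [c / Fraction(k + 1) for k, c in enumerate(p)]

def padd(a, b):
    n = max(len(a), len(b))
    return [(a[i] if i < len(a) else Fraction(0)) +
            (b[i] if i < len(b) else Fraction(0)) for i in range(n)]

u = [Fraction(0)]                      # u_0 = 0
t = [Fraction(0), Fraction(1)]         # the polynomial 't'
for _ in range(3):
    u = pint(padd(t, pmul(u, u)))      # u_{n+1} = ∫_0^x (t + u_n^2) dt
# u is now u_3 = x^2/2 + x^5/20 + x^8/160 + x^11/4400
```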

Example 4.2

Problem

Solve the coupled system:

\frac{du}{dx} = v, \quad \frac{dv}{dx} = x^3(u + v),

with initial conditions u(0) = 1 and v(0) = \frac{1}{2}.

Solution

Transform into integral equations:

u(x) = 1 + \int_0^x v(t) \, dt, \quad v(x) = \frac{1}{2} + \int_0^x t^3 (u(t) + v(t)) \, dt.
  • First Approximation: Assume u_0(x) = 1 and v_0(x) = \frac{1}{2}. Substitute:

    u_1(x) = 1 + \int_0^x \frac{1}{2} \, dt = 1 + \frac{x}{2}, \quad v_1(x) = \frac{1}{2} + \int_0^x t^3 \left(1 + \frac{1}{2}\right) dt = \frac{1}{2} + \frac{3x^4}{8}.
  • Second Approximation: Substitute u_1(x) and v_1(x):

    u_2(x) = 1 + \int_0^x \left(\frac{1}{2} + \frac{3t^4}{8}\right) dt = 1 + \frac{x}{2} + \frac{3x^5}{40}, \quad v_2(x) = \frac{1}{2} + \int_0^x t^3 \left(\frac{3}{2} + \frac{t}{2} + \frac{3t^4}{8}\right) dt = \frac{1}{2} + \frac{3x^4}{8} + \frac{x^5}{10} + \frac{3x^8}{64}.
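The system iterates can be generated mechanically with rational coefficient lists (a pure-Python sketch; the helper names are illustrative):

```python
from fractions import Fraction

def pint(p):
    """∫_0^x p(t) dt as a coefficient list."""
    return [Fraction(0)] + [c / Fraction(k + 1) for k, c in enumerate(p)]

def padd(a, b):
    n = max(len(a), len(b))
    return [(a[i] if i < len(a) else Fraction(0)) +
            (b[i] if i < len(b) else Fraction(0)) for i in range(n)]

u, v = [Fraction(1)], [Fraction(1, 2)]            # u_0 = 1, v_0 = 1/2
for _ in range(2):
    u_next = padd([Fraction(1)], pint(v))         # u = 1 + ∫_0^x v dt
    t3_uv = [Fraction(0)] * 3 + padd(u, v)        # t^3 * (u + v)
    v_next = padd([Fraction(1, 2)], pint(t3_uv))  # v = 1/2 + ∫_0^x t^3 (u + v) dt
    u, v = u_next, v_next
# u = 1 + x/2 + 3x^5/40,  v = 1/2 + 3x^4/8 + x^5/10 + 3x^8/64
```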

Example 4.3

Problem

Solve the second-order nonlinear differential equation:

u(x) \, u''(x) = (u'(x))^2,

with u(0) = 1 and u'(0) = 1.

Solution

Convert to a system:

u'(x) = v(x), \quad v'(x) = \frac{v^2(x)}{u(x)}.

Integral equations:

u(x) = 1 + \int_0^x v(t) \, dt, \quad v(x) = 1 + \int_0^x \frac{v^2(t)}{u(t)} \, dt.
  • First Approximation:

    u_1(x) = 1 + \int_0^x 1 \, dt = 1 + x, \quad v_1(x) = 1 + \int_0^x 1 \, dt = 1 + x.
  • Second Approximation: Substitute u_1(x) and v_1(x):

    u_2(x) = 1 + \int_0^x (1 + t) \, dt = 1 + x + \frac{x^2}{2}, \quad v_2(x) = 1 + \int_0^x \frac{(1 + t)^2}{1 + t} \, dt = 1 + x + \frac{x^2}{2}.

Continuing, the solution converges to:

u(x) = e^x.

Existence theorem of Picard’s method

The existence theorem of Picard’s method provides a rigorous framework to establish that the solutions obtained through successive approximations indeed converge to a limit, and this limit is a valid solution to a given first-order differential equation. The method also defines the sufficient conditions under which the existence of the solution is guaranteed.

The Problem Statement

We consider the first-order ordinary differential equation (ODE):

\frac{du}{dx} = f(x, u), \quad u = b \ \text{when} \ x = a.

Here:

  • f(x, u) is a continuous function of x and u over a specified domain.
  • The goal is to determine whether a solution u(x) exists and can be constructed via successive approximations.

Successive Approximations

Picard’s iterative scheme for approximating the solution is defined as follows:

u_1(x) = b + \int_a^x f(t, b) \, dt,

u_2(x) = b + \int_a^x f(t, u_1(t)) \, dt,

u_3(x) = b + \int_a^x f(t, u_2(t)) \, dt,

\dots

u_{n+1}(x) = b + \int_a^x f(t, u_n(t)) \, dt.

Each u_n(x) is constructed iteratively from the previous approximation. The sequence of approximations u_1, u_2, \dots is expected to converge to a function U(x), which will be the solution of the differential equation.

Sufficient Conditions for Convergence

To ensure that the method converges to a unique solution U(x), the function f(x, u) must satisfy the following conditions:

  1. Boundedness Condition:

    |f(x, u)| \leq M,

    for all x in [a - h, a + h] and u in [b - k, b + k]. Here, M is a positive constant.

  2. Lipschitz Condition:

    |f(x, u) - f(x, v)| \leq A |u - v|,

    for any u, v in the specified range, where A is a positive constant.

Proof of Convergence

Step 1: Boundedness of u_n(x)

From the definition of u_1(x):

|u_1(x) - b| = \left| \int_a^x f(t, b) \, dt \right| \leq \int_a^x |f(t, b)| \, dt \leq M |x - a| \leq Mh.

Similarly, for u_2(x), we have:

|u_2(x) - u_1(x)| = \left| \int_a^x \big(f(t, u_1(t)) - f(t, b)\big) \, dt \right| \leq \int_a^x A \, |u_1(t) - b| \, dt.

Using the bound on u_1:

|u_2(x) - u_1(x)| \leq \frac{A M h^2}{2}.

Continuing this process for u_3, u_4, \dots, we derive:

|u_n(x) - u_{n-1}(x)| \leq \frac{M A^{n-1} h^n}{n!}.

Step 2: Convergence of the Series

The total difference between successive approximations is bounded by the telescoping sum:

\sum_{n=1}^\infty |u_n - u_{n-1}| \leq \frac{M}{A} \sum_{n=1}^\infty \frac{(A h)^n}{n!} = \frac{M}{A}\left(e^{Ah} - 1\right).

This series converges because the exponential series e^{Ah} = \sum_{n=0}^\infty \frac{(Ah)^n}{n!} is finite for every value of Ah. Hence, the sequence u_n(x) converges uniformly to a function U(x).

Verification of the Solution

To prove that U(x) satisfies the differential equation, we show that:

\lim_{n \to \infty} \int_a^x f(t, u_{n-1}(t)) \, dt = \int_a^x f(t, U(t)) \, dt.

This follows from the uniform convergence of u_n(x) to U(x) and the continuity of f(x, u).

Finally, taking the derivative of U(x):

\frac{dU}{dx} = f(x, U(x)), \quad U(a) = b.

Thus, U(x) satisfies the original differential equation.

Example

Consider the differential equation:

\frac{du}{dx} = x + u^2, \quad u(0) = 0.

Using Picard’s method:

  1. u_1(x) = \int_0^x t \, dt = \frac{x^2}{2},
  2. u_2(x) = \int_0^x \left(t + \left(\frac{t^2}{2}\right)^2 \right) dt = \frac{x^2}{2} + \frac{x^5}{20},
  3. u_3(x) = \frac{x^2}{2} + \frac{x^5}{20} + \frac{x^8}{160}, and so on.

These approximations converge to the exact solution for small x.
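That convergence is easy to check: compare u_3 with a high-accuracy numerical solution of the same initial value problem. The sketch below uses the classical fourth-order Runge–Kutta scheme; the step size and the evaluation point x = 0.5 are illustrative choices:

```python
def f(x, u):
    """Right-hand side of du/dx = x + u^2."""
    return x + u * u

# Classical fourth-order Runge-Kutta from x = 0 to x = 0.5 (illustrative step size).
x, u, h = 0.0, 0.0, 1e-3
for _ in range(500):
    k1 = f(x, u)
    k2 = f(x + h / 2, u + h * k1 / 2)
    k3 = f(x + h / 2, u + h * k2 / 2)
    k4 = f(x + h, u + h * k3)
    u += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    x += h

u3 = 0.5**2 / 2 + 0.5**5 / 20 + 0.5**8 / 160   # Picard's u_3 evaluated at x = 0.5
diff = abs(u - u3)
```

At x = 0.5 the two values differ by roughly the size of the first omitted series term, which is small.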

The Adomian decomposition method

The Adomian Decomposition Method (ADM) is a mathematical technique used to solve a wide variety of linear and nonlinear differential equations, integral equations, and systems of equations. This method works by decomposing both the solution and nonlinear terms into series expansions. Unlike traditional methods, ADM does not rely on discretization, perturbation, or linearization, preserving the problem’s original structure.

1. Concept of Adomian Decomposition Method

ADM reformulates the problem to split it into manageable parts. For a general differential equation:

Lu + Ru + Nu = g(x),
  • L: linear operator that is easy to invert.
  • R: linear operator, not necessarily invertible.
  • N: nonlinear operator.
  • g(x): source or forcing function.

The solution u(x) is expressed as a series:

u(x) = \sum_{n=0}^\infty u_n(x),

where the terms u_n(x) are computed iteratively.

The nonlinear term N(u) is expanded using Adomian polynomials:

N(u) = \sum_{n=0}^\infty A_n,

where the A_n are polynomials derived systematically from the terms of u(x).

2. Steps of Adomian Decomposition Method

Step 1: Rewrite the equation
Rearrange the differential equation into:

Lu = g(x) - Ru - Nu.

Step 2: Apply the inverse operator
Apply L^{-1}, the inverse of the linear operator, to isolate u:

u = L^{-1}(g) - L^{-1}(Ru) - L^{-1}(Nu).

Step 3: Decompose the solution
Expand u as a series:

u = \sum_{n=0}^\infty u_n.

Step 4: Decompose nonlinear terms
Express N(u) as a series of Adomian polynomials:

N(u) = \sum_{n=0}^\infty A_n.

Step 5: Solve iteratively
Substitute the expansions into the equation and compute each u_n term step-by-step.

3. Adomian Polynomials

For a nonlinear term N(u), the Adomian polynomials A_n are computed as:

A_n = \frac{1}{n!} \frac{d^n}{d\lambda^n} \left[ N\left( \sum_{k=0}^\infty \lambda^k u_k \right) \right]_{\lambda = 0}.

For example, if N(u) = u^2, the polynomials are:

A_0 = u_0^2, \quad A_1 = 2u_0u_1, \quad A_2 = u_1^2 + 2u_0u_2, \quad \text{and so on.}
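For N(u) = u^2, the derivative formula simply extracts the λⁿ coefficient of (Σ_k λ^k u_k)², so A_n reduces to the discrete convolution Σ_{i+j=n} u_i u_j. A small pure-Python check with arbitrary sample values (all names illustrative):

```python
from fractions import Fraction

def adomian_square(u):
    """A_n for N(u) = u^2: the λ^n coefficient of (Σ_k λ^k u_k)^2,
    i.e. the convolution A_n = Σ_{i+j=n} u_i u_j."""
    A = [Fraction(0)] * len(u)
    for i, ui in enumerate(u):
        for j, uj in enumerate(u):
            if i + j < len(u):
                A[i + j] += ui * uj
    return A

# Sample components u_0, u_1, u_2 (arbitrary illustrative values).
u = [Fraction(2), Fraction(3), Fraction(5)]
A = adomian_square(u)
# A_0 = u_0^2, A_1 = 2 u_0 u_1, A_2 = u_1^2 + 2 u_0 u_2
```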

4. Example Problem

Problem: Solve the nonlinear first-order differential equation:

u'(x) + u^2(x) = x, \quad u(0) = 0.

Solution:

Step 1: Rewrite the equation

u'(x) = x - u^2(x).

Step 2: Identify operators
Here, L = \frac{d}{dx}, and its inverse L^{-1} is integration from 0 to x (consistent with the initial condition u(0) = 0):

L^{-1}(f) = \int_0^x f(t) \, dt.

The equation becomes:

u = \int_0^x t \, dt - \int_0^x u^2(t) \, dt.

Step 3: Decompose the solution
Assume u(x) = \sum_{n=0}^\infty u_n(x).

Step 4: Decompose the nonlinear term
For N(u) = u^2, the Adomian polynomials are:

A_0 = u_0^2, \quad A_1 = 2u_0u_1, \quad A_2 = u_1^2 + 2u_0u_2, \quad \dots

Step 5: Compute iteratively
Start with the initial term:

u_0 = \int_0^x t \, dt = \frac{x^2}{2}.

Next, compute corrections:

  1. First correction:

    u_1 = -\int_0^x u_0^2 \, dt = -\int_0^x \left(\frac{t^2}{2}\right)^2 dt = -\int_0^x \frac{t^4}{4} \, dt = -\frac{x^5}{20}.
  2. Second correction:

    u_2 = -\int_0^x 2u_0u_1 \, dt = -\int_0^x 2\left(\frac{t^2}{2}\right)\left(-\frac{t^5}{20}\right) dt = \frac{x^8}{160}.

Continue adding terms:

u(x) = u_0 + u_1 + u_2 + \dots = \frac{x^2}{2} - \frac{x^5}{20} + \frac{x^8}{160} + \dots
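The full ADM recursion for this example can be run with exact rational polynomial arithmetic (a pure-Python sketch; the helper names are illustrative):

```python
from fractions import Fraction

def pmul(a, b):
    """Multiply two polynomials given as coefficient lists."""
    out = [Fraction(0)] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def pint(p):
    """∫_0^x p(t) dt as a coefficient list."""
    return [Fraction(0)] + [c / Fraction(k + 1) for k, c in enumerate(p)]

def padd(a, b):
    n = max(len(a), len(b))
    return [(a[i] if i < len(a) else Fraction(0)) +
            (b[i] if i < len(b) else Fraction(0)) for i in range(n)]

terms = [pint([Fraction(0), Fraction(1)])]       # u_0 = ∫_0^x t dt = x^2/2
for n in range(2):
    A_n = [Fraction(0)]
    for i in range(n + 1):                       # A_n = Σ_{i+j=n} u_i u_j for N(u) = u^2
        A_n = padd(A_n, pmul(terms[i], terms[n - i]))
    terms.append([-c for c in pint(A_n)])        # u_{n+1} = -∫_0^x A_n dt
# terms: u_0 = x^2/2, u_1 = -x^5/20, u_2 = x^8/160
```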

5. Properties of the Method

Advantages:

  • Preserves the original structure of the equation.
  • Provides an analytical solution in the form of a convergent series.
  • Avoids discretization, linearization, or perturbation.

Challenges:

  • Computational effort increases with highly nonlinear terms.
  • Requires symbolic computation for higher-order Adomian polynomials.

6. Applications of ADM

The ADM is widely applied in various fields, including:

  • Nonlinear dynamics and chaos theory.
  • Fluid mechanics and heat transfer.
  • Quantum mechanics and wave equations.
  • Population dynamics and ecological modeling.

This method remains a robust tool for tackling nonlinear and complex problems while providing analytical insights.

