
shape_value*fe_values return Tensor<1,1> #17023

Open
saitoasukakawaii opened this issue May 15, 2024 · 2 comments


saitoasukakawaii commented May 15, 2024

Consider a simple 1D problem like the following:

\frac{du}{dx} = 1

In deal.II, it can be implemented with the following code:

        for (const unsigned int i : fe_values.dof_indices())
          {
            for (const unsigned int j : fe_values.dof_indices())
              {
                cell_matrix(i, j) += (fe_values.shape_value(i, q_index) * // phi_i(x_q)
                                      fe_values.shape_grad(j, q_index) *  // grad phi_j(x_q)
                                      fe_values.JxW(q_index));
              }
          }

However, in the 1D case, shape_value*shape_grad returns a Tensor<1,1>, so the code above does not compile. I have to do the following instead:

        for (const unsigned int i : fe_values.dof_indices())
          {
            for (const unsigned int j : fe_values.dof_indices())
              {
                auto aa = (fe_values.shape_value(i, q_index) * // phi_i(x_q)
                           fe_values.shape_grad(j, q_index) *  // grad phi_j(x_q)
                           fe_values.JxW(q_index));
                cell_matrix(i, j) += aa[0];
              }
          }

Is this an issue, and if so, should it be optimized?

@bangerth
Member

@saitoasukakawaii Conceptually, the derivative in 1d is a rank-1 tensor of dimension 1. In your equation, you implicitly take its dot product with another tensor that is simply [+1], to indicate that the derivative is to be interpreted as a quantity moving forward (like time). So the code you show might be written as

        for (const unsigned int i : fe_values.dof_indices())
          {
            for (const unsigned int j : fe_values.dof_indices())
              {
                cell_matrix(i, j) += (fe_values.shape_value(i, q_index) *          // phi_i(x_q)
                                      (Tensor<1,1>(1) * fe_values.shape_grad(j, q_index)) * // [+1] . grad phi_j(x_q)
                                      fe_values.JxW(q_index));
              }
          }

which is of course equivalent to saying

        for (const unsigned int i : fe_values.dof_indices())
          {
            for (const unsigned int j : fe_values.dof_indices())
              {
                cell_matrix(i, j) += (fe_values.shape_value(i, q_index) *   // phi_i(x_q)
                                      fe_values.shape_grad(j, q_index)[0] * // d/dx phi_j(x_q)
                                      fe_values.JxW(q_index));
              }
          }

Does that solve the problem?

@bangerth
Member

As far as optimization goes: I don't think any of these variants will produce code with a measurable difference in performance.
