Consider a simple 1D problem:

\frac{du}{dx} = 1

In deal.II, it can be realized by the following code:
for (const unsigned int i : fe_values.dof_indices())
  {
    for (const unsigned int j : fe_values.dof_indices())
      {
        cell_matrix(i, j) += (fe_values.shape_value(i, q_index) * // phi_i(x_q)
                              fe_values.shape_grad(j, q_index) *  // grad phi_j(x_q)
                              fe_values.JxW(q_index));
      }
  }
However, in 1D, shape_value * shape_grad returns a Tensor<1,1>, so the code above does not compile. Instead, I have to write the following:
for (const unsigned int i : fe_values.dof_indices())
  {
    for (const unsigned int j : fe_values.dof_indices())
      {
        auto aa = (fe_values.shape_value(i, q_index) * // phi_i(x_q)
                   fe_values.shape_grad(j, q_index) *  // grad phi_j(x_q)
                   fe_values.JxW(q_index));
        cell_matrix(i, j) += aa[0];
      }
  }
Is this an issue, and if so, should it be fixed?
@saitoasukakawaii Conceptually, the derivative in 1d is a rank-1 tensor of dimension 1. In your equation, you take the dot product of it with another tensor that is simply [+1], to indicate that the derivative is to be interpreted as a quantity moving forward (like time). So the code you show might be written as
for (const unsigned int i : fe_values.dof_indices())
  {
    for (const unsigned int j : fe_values.dof_indices())
      {
        cell_matrix(i, j) += (fe_values.shape_value(i, q_index) *                      // phi_i(x_q)
                              (Tensor<1,1>(1) * fe_values.shape_grad(j, q_index)) *    // [+1] \cdot grad phi_j(x_q)
                              fe_values.JxW(q_index));
      }
  }
which is of course equivalent to saying
for (const unsigned int i : fe_values.dof_indices())
  {
    for (const unsigned int j : fe_values.dof_indices())
      {
        cell_matrix(i, j) += (fe_values.shape_value(i, q_index) *    // phi_i(x_q)
                              fe_values.shape_grad(j, q_index)[0] *  // d/dx phi_j(x_q)
                              fe_values.JxW(q_index));
      }
  }