Fix (likely) typo in glGetTextureSubImage height calculation

The old code used "test_textures[i].id == GL_TEXTURE_1D", but
test_textures::id contains the texture object ID, not the texture
target, so the comparison effectively never matched and every texture,
including the 1D one, was passed the full 2D height.  As a result, the
GL_TEXTURE_1D test always got the error it expected, but for the wrong
reason.
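
For illustration, a minimal standalone sketch of the bug class, using
hypothetical names (TestTexture, tex) rather than the test's actual
types: object names handed out by glGenTextures are typically small
integers such as 1, while GL_TEXTURE_1D is the enum value 0x0DE0, so
a name-vs-target comparison is effectively never true.

    #include <cstdio>

    typedef unsigned int GLuint; /* stand-ins for the GL typedefs */
    typedef unsigned int GLenum;
    #define GL_TEXTURE_1D 0x0DE0

    struct TestTexture
    {
        GLenum target; /* texture target, e.g. GL_TEXTURE_1D */
        GLuint id;     /* object name, as from glGenTextures */
    };

    int main(void)
    {
        TestTexture tex = { GL_TEXTURE_1D, 1 };
        GLuint m_texture_1D = 1; /* name recorded at creation time */

        /* Old check: object name (1) vs. target enum (0x0DE0). */
        std::printf("old: %d\n", tex.id == GL_TEXTURE_1D); /* 0 */

        /* Fixed check, mirroring the patch: name vs. known 1D name. */
        std::printf("new: %d\n", tex.id == m_texture_1D);  /* 1 */
        return 0;
    }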

Components: OpenGL

VK-GL-CTS issue: 912

Affects:
KHR-GL46.get_texture_sub_image.errors_test

Change-Id: I4d3de1ccbd3b0cc554af23ceae16d590c82e81d4
diff --git a/external/openglcts/modules/gl/gl4cGetTextureSubImageTests.cpp b/external/openglcts/modules/gl/gl4cGetTextureSubImageTests.cpp
index 47a9fc6..57c3487 100644
--- a/external/openglcts/modules/gl/gl4cGetTextureSubImageTests.cpp
+++ b/external/openglcts/modules/gl/gl4cGetTextureSubImageTests.cpp
@@ -520,7 +520,7 @@
 	for (glw::GLuint i = 0; i < test_textures_size; ++i)
 	{
 		m_gl_GetTextureSubImage(test_textures[i].id, 0, 0, 0, 1, s_texture_data_width,
-								(test_textures[i].id == GL_TEXTURE_1D) ? 1 : s_texture_data_height, 2, GL_RGBA,
+								(test_textures[i].id == m_texture_1D) ? 1 : s_texture_data_height, 2, GL_RGBA,
 								GL_UNSIGNED_BYTE, s_destination_buffer_size, m_destination_buffer);
 
 		glw::GLint error_value	 = gl.getError();