
OpenGL 3.0 specification (english)


The OpenGL® Graphics System:
A Specification
(Version 3.0 - August 11, 2008)

Mark Segal
Kurt Akeley

Editor (version 1.1): Chris Frazier
Editor (versions 1.2-3.0): Jon Leech
Editor (version 2.0): Pat Brown

Copyright © 2006-2008 The Khronos Group Inc. All Rights Reserved.

This specification is protected by copyright laws and contains material proprietary to the Khronos Group, Inc. It or any components may not be reproduced, republished, distributed, transmitted, displayed, broadcast or otherwise exploited in any manner without the express prior written permission of Khronos Group. You may use this specification for implementing the functionality therein, without altering or removing any trademark, copyright or other notice from the specification, but the receipt or possession of this specification does not convey any rights to reproduce, disclose, or distribute its contents, or to manufacture, use, or sell anything that it may describe, in whole or in part.

Khronos Group grants express permission to any current Promoter, Contributor or Adopter member of Khronos to copy and redistribute UNMODIFIED versions of this specification in any fashion, provided that NO CHARGE is made for the specification and the latest available update of the specification for any version of the API is used whenever possible. Such distributed specification may be reformatted AS LONG AS the contents of the specification are not changed in any way. The specification may be incorporated into a product that is sold as long as such product includes significant independent work developed by the seller. A link to the current version of this specification on the Khronos Group website should be included whenever possible with specification distributions.

Khronos Group makes no, and expressly disclaims any, representations or warranties, express or implied, regarding this specification, including, without limitation, any implied warranties of merchantability or fitness for a particular purpose or non-infringement of any intellectual property. Khronos Group makes no, and expressly disclaims any, warranties, express or implied, regarding the correctness, accuracy, completeness, timeliness, and reliability of the specification. Under no circumstances will the Khronos Group, or any of its Promoters, Contributors or Members or their respective partners, officers, directors, employees, agents or representatives be liable for any damages, whether direct, indirect, special or consequential damages for lost revenues, lost profits, or otherwise, arising from or in connection with these materials.

Khronos is a trademark of The Khronos Group Inc. OpenGL is a registered trademark, and OpenGL ES is a trademark, of Silicon Graphics, Inc.

Contents

1 Introduction
  1.1 Formatting of Optional Features
  1.2 What is the OpenGL Graphics System?
  1.3 Programmer's View of OpenGL
  1.4 Implementor's View of OpenGL
  1.5 Our View
  1.6 The Deprecation Model
  1.7 Companion Documents
    1.7.1 OpenGL Shading Language
    1.7.2 Window System Bindings

2 OpenGL Operation
  2.1 OpenGL Fundamentals
    2.1.1 Floating-Point Computation
    2.1.2 16-Bit Floating-Point Numbers
    2.1.3 Unsigned 11-Bit Floating-Point Numbers
    2.1.4 Unsigned 10-Bit Floating-Point Numbers
  2.2 GL State
    2.2.1 Shared Object State
  2.3 GL Command Syntax
  2.4 Basic GL Operation
  2.5 GL Errors
  2.6 Begin/End Paradigm
    2.6.1 Begin and End
    2.6.2 Polygon Edges
    2.6.3 GL Commands within Begin/End
  2.7 Vertex Specification
  2.8 Vertex Arrays
  2.9 Buffer Objects
    2.9.1 Vertex Arrays in Buffer Objects
    2.9.2 Array Indices in Buffer Objects
    2.9.3 Buffer Object State
  2.10 Vertex Array Objects
  2.11 Rectangles
  2.12 Coordinate Transformations
    2.12.1 Controlling the Viewport
    2.12.2 Matrices
    2.12.3 Normal Transformation
    2.12.4 Generating Texture Coordinates
  2.13 Asynchronous Queries
  2.14 Conditional Rendering
  2.15 Transform Feedback
  2.16 Primitive Queries
  2.17 Clipping
  2.18 Current Raster Position
  2.19 Colors and Coloring
    2.19.1 Lighting
    2.19.2 Lighting Parameter Specification
    2.19.3 ColorMaterial
    2.19.4 Lighting State
    2.19.5 Color Index Lighting
    2.19.6 Clamping or Masking
    2.19.7 Flatshading
    2.19.8 Color and Associated Data Clipping
    2.19.9 Final Color Processing
  2.20 Vertex Shaders
    2.20.1 Shader Objects
    2.20.2 Program Objects
    2.20.3 Shader Variables
    2.20.4 Shader Execution
    2.20.5 Required State

3 Rasterization
  3.1 Discarding Primitives Before Rasterization
  3.2 Invariance
  3.3 Antialiasing
    3.3.1 Multisampling
  3.4 Points
    3.4.1 Basic Point Rasterization
    3.4.2 Point Rasterization State
    3.4.3 Point Multisample Rasterization
  3.5 Line Segments
    3.5.1 Basic Line Segment Rasterization
    3.5.2 Other Line Segment Features
    3.5.3 Line Rasterization State
    3.5.4 Line Multisample Rasterization
  3.6 Polygons
    3.6.1 Basic Polygon Rasterization
    3.6.2 Stippling
    3.6.3 Antialiasing
    3.6.4 Options Controlling Polygon Rasterization
    3.6.5 Depth Offset
    3.6.6 Polygon Multisample Rasterization
    3.6.7 Polygon Rasterization State
  3.7 Pixel Rectangles
    3.7.1 Pixel Storage Modes and Pixel Buffer Objects
    3.7.2 The Imaging Subset
    3.7.3 Pixel Transfer Modes
    3.7.4 Rasterization of Pixel Rectangles
    3.7.5 Pixel Transfer Operations
    3.7.6 Pixel Rectangle Multisample Rasterization
  3.8 Bitmaps
  3.9 Texturing
    3.9.1 Texture Image Specification
    3.9.2 Alternate Texture Image Specification Commands
    3.9.3 Compressed Texture Images
    3.9.4 Texture Parameters
    3.9.5 Depth Component Textures
    3.9.6 Cube Map Texture Selection
    3.9.7 Texture Minification
    3.9.8 Texture Magnification
    3.9.9 Combined Depth/Stencil Textures
    3.9.10 Texture Completeness
    3.9.11 Texture State and Proxy State
    3.9.12 Texture Objects
    3.9.13 Texture Environments and Texture Functions
    3.9.14 Texture Comparison Modes
    3.9.15 sRGB Texture Color Conversion
    3.9.16 Shared Exponent Texture Color Conversion
    3.9.17 Texture Application
  3.10 Color Sum
  3.11 Fog
  3.12 Fragment Shaders
    3.12.1 Shader Variables
    3.12.2 Shader Execution
  3.13 Antialiasing Application
  3.14 Multisample Point Fade

4 Per-Fragment Operations and the Framebuffer
  4.1 Per-Fragment Operations
    4.1.1 Pixel Ownership Test
    4.1.2 Scissor Test
    4.1.3 Multisample Fragment Operations
    4.1.4 Alpha Test
    4.1.5 Stencil Test
    4.1.6 Depth Buffer Test
    4.1.7 Occlusion Queries
    4.1.8 Blending
    4.1.9 sRGB Conversion
    4.1.10 Dithering
    4.1.11 Logical Operation
    4.1.12 Additional Multisample Fragment Operations
  4.2 Whole Framebuffer Operations
    4.2.1 Selecting a Buffer for Writing
    4.2.2 Fine Control of Buffer Updates
    4.2.3 Clearing the Buffers
    4.2.4 The Accumulation Buffer
  4.3 Drawing, Reading, and Copying Pixels
    4.3.1 Writing to the Stencil or Depth/Stencil Buffers
    4.3.2 Reading Pixels
    4.3.3 Copying Pixels
    4.3.4 Pixel Draw/Read State
  4.4 Framebuffer Objects
    4.4.1 Binding and Managing Framebuffer Objects
    4.4.2 Attaching Images to Framebuffer Objects
    4.4.3 Rendering When an Image of a Bound Texture Object is Also Attached to the Framebuffer
    4.4.4 Framebuffer Completeness
    4.4.5 Effects of Framebuffer State on Framebuffer Dependent Values
    4.4.6 Mapping between Pixel and Element in Attached Image

5 Special Functions
  5.1 Evaluators
  5.2 Selection
  5.3 Feedback
  5.4 Display Lists
  5.5 Flush and Finish
  5.6 Hints

6 State and State Requests
  6.1 Querying GL State
    6.1.1 Simple Queries
    6.1.2 Data Conversions
    6.1.3 Enumerated Queries
    6.1.4 Texture Queries
    6.1.5 Stipple Query
    6.1.6 Color Matrix Query
    6.1.7 Color Table Query
    6.1.8 Convolution Query
    6.1.9 Histogram Query
    6.1.10 Minmax Query
    6.1.11 Pointer and String Queries
    6.1.12 Asynchronous Queries
    6.1.13 Buffer Object Queries
    6.1.14 Vertex Array Object Queries
    6.1.15 Shader and Program Queries
    6.1.16 Framebuffer Object Queries
    6.1.17 Renderbuffer Object Queries
    6.1.18 Saving and Restoring State
  6.2 State Tables

A Invariance
  A.1 Repeatability
  A.2 Multi-pass Algorithms
  A.3 Invariance Rules
  A.4 What All This Means

B Corollaries

C Compressed Texture Image Formats
  C.1 RGTC Compressed Texture Image Formats
    C.1.1 Format COMPRESSED_RED_RGTC1
    C.1.2 Format COMPRESSED_SIGNED_RED_RGTC1
    C.1.3 Format COMPRESSED_RG_RGTC2
    C.1.4 Format COMPRESSED_SIGNED_RG_RGTC2

D Shared Objects and Multiple Contexts
  D.1 Object Deletion Behavior

E The Deprecation Model
  E.1 Profiles and Deprecated Features of OpenGL 3.0

F Version 1.1
  F.1 Vertex Array
  F.2 Polygon Offset
  F.3 Logical Operation
  F.4 Texture Image Formats
  F.5 Texture Replace Environment
  F.6 Texture Proxies
  F.7 Copy Texture and Subtexture
  F.8 Texture Objects
  F.9 Other Changes
  F.10 Acknowledgements

G Version 1.2
  G.1 Three-Dimensional Texturing
  G.2 BGRA Pixel Formats
  G.3 Packed Pixel Formats
  G.4 Normal Rescaling
  G.5 Separate Specular Color
  G.6 Texture Coordinate Edge Clamping
  G.7 Texture Level of Detail Control
  G.8 Vertex Array Draw Element Range
  G.9 Imaging Subset
    G.9.1 Color Tables
    G.9.2 Convolution
    G.9.3 Color Matrix
    G.9.4 Pixel Pipeline Statistics
    G.9.5 Constant Blend Color
    G.9.6 New Blending Equations
  G.10 Acknowledgements

H Version 1.2.1

I Version 1.3
  I.1 Compressed Textures
  I.2 Cube Map Textures
  I.3 Multisample
  I.4 Multitexture
  I.5 Texture Add Environment Mode
  I.6 Texture Combine Environment Mode
  I.7 Texture Dot3 Environment Mode
  I.8 Texture Border Clamp
  I.9 Transpose Matrix
  I.10 Acknowledgements

J Version 1.4
  J.1 Automatic Mipmap Generation
  J.2 Blend Squaring
  J.3 Changes to the Imaging Subset
  J.4 Depth Textures and Shadows
  J.5 Fog Coordinate
  J.6 Multiple Draw Arrays
  J.7 Point Parameters
  J.8 Secondary Color
  J.9 Separate Blend Functions
  J.10 Stencil Wrap
  J.11 Texture Crossbar Environment Mode
  J.12 Texture LOD Bias
  J.13 Texture Mirrored Repeat
  J.14 Window Raster Position
  J.15 Acknowledgements

K Version 1.5
  K.1 Buffer Objects
  K.2 Occlusion Queries
  K.3 Shadow Functions
  K.4 Changed Tokens
  K.5 Acknowledgements

L Version 2.0
  L.1 Programmable Shading
    L.1.1 Shader Objects
    L.1.2 Shader Programs
    L.1.3 OpenGL Shading Language
    L.1.4 Changes To Shader APIs
  L.2 Multiple Render Targets
  L.3 Non-Power-Of-Two Textures
  L.4 Point Sprites
  L.5 Separate Blend Equation
  L.6 Separate Stencil
  L.7 Other Changes
  L.8 Acknowledgements

M Version 2.1
  M.1 OpenGL Shading Language
  M.2 Non-Square Matrices
  M.3 Pixel Buffer Objects
  M.4 sRGB Textures
  M.5 Other Changes
  M.6 Acknowledgements

N Version 3.0
  N.1 New Features
  N.2 Deprecation Model
  N.3 Changed Tokens
  N.4 Credits and Acknowledgements

O ARB Extensions
  O.1 Naming Conventions
  O.2 Promoting Extensions to Core Features
  O.3 Multitexture
  O.4 Transpose Matrix
  O.5 Multisample
  O.6 Texture Add Environment Mode
  O.7 Cube Map Textures
  O.8 Compressed Textures
  O.9 Texture Border Clamp
  O.10 Point Parameters
  O.11 Vertex Blend
  O.12 Matrix Palette
  O.13 Texture Combine Environment Mode
  O.14 Texture Crossbar Environment Mode
  O.15 Texture Dot3 Environment Mode
  O.16 Texture Mirrored Repeat
  O.17 Depth Texture
  O.18 Shadow
  O.19 Shadow Ambient
  O.20 Window Raster Position
  O.21 Low-Level Vertex Programming
  O.22 Low-Level Fragment Programming
  O.23 Buffer Objects
  O.24 Occlusion Queries
  O.25 Shader Objects
  O.26 High-Level Vertex Programming
  O.27 High-Level Fragment Programming
  O.28 OpenGL Shading Language
  O.29 Non-Power-Of-Two Textures
  O.30 Point Sprites
  O.31 Fragment Program Shadow
  O.32 Multiple Render Targets
  O.33 Rectangular Textures
  O.34 Floating-Point Color Buffers
  O.35 Half-Precision Floating Point
  O.36 Floating-Point Textures
  O.37 Pixel Buffer Objects

Index

List of Figures

2.1 Block diagram of the GL.
2.2 Creation of a processed vertex from a transformed vertex and current values.
2.3 Primitive assembly and processing.
2.4 Triangle strips, fans, and independent triangles.
2.5 Quadrilateral strips and independent quadrilaterals.
2.6 Vertex transformation sequence.
2.7 Current raster position.
2.8 Processing of RGBA colors.
2.9 Processing of color indices.
2.10 ColorMaterial operation.
3.1 Rasterization.
3.2 Rasterization of non-antialiased wide points.
3.3 Rasterization of antialiased wide points.
3.4 Visualization of Bresenham's algorithm.
3.5 Rasterization of non-antialiased wide lines.
3.6 The region used in rasterizing an antialiased line segment.
3.7 Operation of DrawPixels.
3.8 Selecting a subimage from an image.
3.9 A bitmap and its associated parameters.
3.10 A texture image and the coordinates used to access it.
3.11 Multitexture pipeline.
4.1 Per-fragment operations.
4.2 Operation of ReadPixels.
4.3 Operation of CopyPixels.
5.1 Map Evaluation.
5.2 Feedback syntax.

List of Tables

2.1 GL command suffixes.
2.2 GL data types.
2.3 Summary of GL errors.
2.4 Vertex array sizes (values per vertex) and data types.
2.5 Variables that direct the execution of InterleavedArrays.
2.6 Buffer object parameters and their values.
2.7 Buffer object initial state.
2.8 Buffer object state set by MapBuffer.
2.9 Transform feedback modes.
2.10 Component conversions.
2.11 Summary of lighting parameters.
2.12 Correspondence of lighting parameter symbols to names.
2.13 Polygon flatshading color selection.
3.1 PixelStore parameters.
3.2 PixelTransfer parameters.
3.3 PixelMap parameters.
3.4 Color table names.
3.5 DrawPixels and ReadPixels types.
3.6 DrawPixels and ReadPixels formats.
3.7 SwapBytes bit ordering.
3.8 Packed pixel formats.
3.9 UNSIGNED_BYTE formats. Bit numbers are indicated for each component.
3.10 UNSIGNED_SHORT formats.
3.11 UNSIGNED_INT formats.
3.12 Packed pixel field assignments.
3.13 Color table lookup.
3.14 Computation of filtered color components.
3.15 Conversion from RGBA, depth, and stencil pixel components to internal texture, table, or filter components.
3.16 Sized internal color formats.
3.17 Sized internal luminance and intensity formats.
3.18 Sized internal depth and stencil formats.
3.19 Generic and specific compressed internal formats.
3.20 Texture parameters and their values.
3.21 Selection of cube map images.
3.22 Texel location wrap mode application.
3.23 Correspondence of filtered texture components to texture source components.
3.24 Texture functions REPLACE, MODULATE, and DECAL.
3.25 Texture functions BLEND and ADD.
3.26 COMBINE texture functions.
3.27 Arguments for COMBINE_RGB functions.
3.28 Arguments for COMBINE_ALPHA functions.
3.29 Depth texture comparison functions.
4.1 RGB and Alpha blend equations.
4.2 Blending functions.
4.3 Arguments to LogicOp and their corresponding operations.
4.4 Buffer selection for the default framebuffer.
4.5 Buffer selection for a framebuffer object.
4.6 DrawBuffers buffer selection for the default framebuffer.
4.7 PixelStore parameters.
4.8 ReadPixels index masks.
4.9 ReadPixels GL data types and reversed component conversion formulas.
4.10 Effective ReadPixels format for DEPTH_STENCIL CopyPixels operation.
4.11 Correspondence of renderbuffer sized to base internal formats.
4.12 Framebuffer attachment points.
5.1 Values specified by the target to Map1.
5.2 Correspondence of feedback type to number of values per vertex.
5.3 Hint targets and descriptions.
6.1 Texture, table, and filter return values.
6.2 Attribute groups.
6.3 State Variable Types.
6.4 GL Internal begin-end state variables (inaccessible).
6.5 Current Values and Associated Data.
6.6 Vertex Array Object State.
6.7 Vertex Array Object State (cont.)
6.8 Vertex Array Object State (cont.)
6.9 Vertex Array Object State (cont.)
6.10 Vertex Array Data (not in Vertex Array objects).
6.11 Buffer Object State.
6.12 Transformation state.
6.13 Coloring.
6.14 Lighting (see also table 2.11 for defaults).
6.15 Lighting (cont.)
6.16 Rasterization.
6.17 Rasterization (cont.)
6.18 Multisampling.
6.19 Textures (state per texture unit and binding point).
6.20 Textures (state per texture object).
6.21 Textures (state per texture image).
6.22 Texture Environment and Generation.
6.23 Texture Environment and Generation (cont.)
6.24 Pixel Operations.
6.25 Pixel Operations (cont.)
6.26 Framebuffer Control.
6.27 Framebuffer (state per target binding point).
6.28 Framebuffer (state per framebuffer object).
6.29 Framebuffer (state per attachment point).
6.30 Renderbuffer (state per target and binding point).
6.31 Renderbuffer (state per renderbuffer object).
6.32 Pixels.
6.33 Pixels (cont.)
6.34 Pixels (cont.)
6.35 Pixels (cont.)
6.36 Pixels (cont.)
6.37 Pixels (cont.)
6.38 Evaluators (GetMap takes a map name).
6.39 Shader Object State.
6.40 Program Object State.
6.41 Program Object State (cont.)
6.42 Vertex Shader State.
6.43 Query Object State.
6.44 Transform Feedback State.
6.45 Hints.
6.46 Implementation Dependent Values.
6.47 Implementation Dependent Values (cont.)
6.48 Implementation Dependent Values (cont.)
6.49 Implementation Dependent Values (cont.)
6.50 Implementation Dependent Values (cont.)
6.51 Framebuffer Dependent Values.
6.52 Miscellaneous.
K.1 New token names.
N.1 New token names.

Chapter 1

Introduction

This document describes the OpenGL graphics system: what it is, how it acts, and what is required to implement it. We assume that the reader has at least a rudimentary understanding of computer graphics. This means familiarity with the essentials of computer graphics algorithms as well as familiarity with basic graphics hardware and associated terms.

1.1 Formatting of Optional Features

Starting with version 1.2 of OpenGL, some features in the specification are considered optional; an OpenGL implementation may or may not choose to provide them (see section 3.7.2).

Portions of the specification which are optional are so described where the optional features are first defined (see section 3.7.2). State table entries which are optional are typeset against a gray background.
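One way a program can determine at run time whether an optional feature is available is to look for the corresponding extension name in the extension string; in a live program that string would come from glGetString(GL_EXTENSIONS), which requires a current GL context. The following is a minimal sketch of whole-token matching against such a space-separated list; the list literals below are illustrative, not queried from a real implementation.

```c
#include <string.h>

/* Return 1 if name appears as a whole token in the space-separated
 * extension list, 0 otherwise. Whole-token matching matters because
 * one extension name may be a prefix of another. */
static int has_extension(const char *extlist, const char *name)
{
    size_t len = strlen(name);
    const char *p = extlist;

    while ((p = strstr(p, name)) != NULL) {
        /* A match counts only if it is bounded by the string start/end
         * or by spaces on both sides. */
        int starts = (p == extlist) || (p[-1] == ' ');
        int ends = (p[len] == '\0') || (p[len] == ' ');
        if (starts && ends)
            return 1;
        p += len; /* substring match only; keep scanning */
    }
    return 0;
}
```

For example, `has_extension("GL_ARB_imaging GL_ARB_multitexture", "GL_ARB_imaging")` yields 1, while a list containing only `"GL_ARB_imaging_subset"` (a hypothetical longer name) would not match `"GL_ARB_imaging"`.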

1.2 What is the OpenGL Graphics System?

OpenGL (for “Open Graphics Library”) is a software interface to graphics hardware. The interface consists of a set of several hundred procedures and functions that allow a programmer to specify the objects and operations involved in producing high-quality graphical images, specifically color images of three-dimensional objects.

Most of OpenGL requires that the graphics hardware contain a framebuffer. Many OpenGL calls pertain to drawing objects such as points, lines, polygons, and bitmaps, but the way that some of this drawing occurs (such as when antialiasing or texturing is enabled) relies on the existence of a framebuffer. Further, some of OpenGL is specifically concerned with framebuffer manipulation.

1.3 Programmer’s View of OpenGL

To the programmer, OpenGL is a set of commands that allow the specification of geometric objects in two or three dimensions, together with commands that control how these objects are rendered into the framebuffer. For the most part, OpenGL provides an immediate-mode interface, meaning that specifying an object causes it to be drawn.

A typical program that uses OpenGL begins with calls to open a window into the framebuffer into which the program will draw. Then, calls are made to allocate a GL context and associate it with the window. Once a GL context is allocated, the programmer is free to issue OpenGL commands. Some calls are used to draw simple geometric objects (i.e. points, line segments, and polygons), while others affect the rendering of these primitives including how they are lit or colored and how they are mapped from the user’s two- or three-dimensional model space to the two-dimensional screen. There are also calls to effect direct control of the framebuffer, such as reading and writing pixels.
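The final step of that mapping to the two-dimensional screen is the viewport transformation, defined later in section 2.12.1: normalized device coordinates in [-1, 1] are mapped into the window rectangle given by Viewport(x, y, w, h). A small sketch of that formula (the struct and function names are illustrative, not part of the GL API):

```c
/* Window coordinates produced by the viewport transformation. */
typedef struct { double xw, yw; } WindowCoord;

/* Map normalized device coordinates (xd, yd), each in [-1, 1], into the
 * window rectangle specified as Viewport(x, y, w, h), following the
 * formulas of section 2.12.1 of the specification. */
static WindowCoord viewport_transform(double xd, double yd,
                                      int x, int y, int w, int h)
{
    WindowCoord wc;
    wc.xw = (w / 2.0) * xd + (x + w / 2.0);
    wc.yw = (h / 2.0) * yd + (y + h / 2.0);
    return wc;
}
```

With an 800x600 viewport at the origin, the NDC point (0, 0) lands at the window center (400, 300), and (-1, -1) at the lower-left corner (0, 0).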

1.4 Implementor’s View of OpenGL

To the implementor, OpenGL is a set of commands that affect the operation of graphics hardware. If the hardware consists only of an addressable framebuffer, then OpenGL must be implemented almost entirely on the host CPU. More typically, the graphics hardware may comprise varying degrees of graphics acceleration, from a raster subsystem capable of rendering two-dimensional lines and polygons to sophisticated floating-point processors capable of transforming and computing on geometric data. The OpenGL implementor’s task is to provide the CPU software interface while dividing the work for each OpenGL command between the CPU and the graphics hardware. This division must be tailored to the available graphics hardware to obtain optimum performance in carrying out OpenGL calls.

OpenGL maintains a considerable amount of state information. This state controls how objects are drawn into the framebuffer. Some of this state is directly available to the user: he or she can make calls to obtain its value. Some of it, however, is visible only by the effect it has on what is drawn. One of the main goals of this specification is to make OpenGL state information explicit, to elucidate how it changes, and to indicate what its effects are.


1.5 Our View

We view OpenGL as a pipeline having some programmable stages and some state-driven stages that control a set of specific drawing operations. This model should engender a specification that satisfies the needs of both programmers and implementors. It does not, however, necessarily provide a model for implementation. An implementation must produce results conforming to those produced by the specified methods, but there may be ways to carry out a particular computation that are more efficient than the one specified.

1.6 The Deprecation Model

GL features marked as deprecated in one version of the specification are expected to be removed in a future version, allowing applications time to transition away from use of deprecated features. The deprecation model is described in more detail, together with a summary of the commands and state deprecated from this version of the API, in appendix E.

1.7 Companion Documents

1.7.1 OpenGL Shading Language

This specification should be read together with a companion document titled The OpenGL Shading Language. The latter document (referred to as the OpenGL Shading Language Specification hereafter) defines the syntax and semantics of the programming language used to write vertex and fragment shaders (see sections 2.20 and 3.12). These sections may include references to concepts and terms (such as shading language variable types) defined in the companion document.

OpenGL 3.0 implementations are guaranteed to support at least versions 1.10, 1.20, and 1.30 of the shading language, although versions 1.10 and 1.20 are deprecated in a forward-compatible context. The actual version supported may be queried as described in section 6.1.11.

1.7.2 Window System Bindings

OpenGL requires a companion API to create and manage graphics contexts, windows to render into, and other resources beyond the scope of this Specification. There are several such APIs supporting different operating and window systems.

OpenGL Graphics with the X Window System, also called the “GLX Specification”, describes the GLX API for use of OpenGL in the X Window System. It is primarily directed at Linux and Unix systems, but GLX implementations also exist for Microsoft Windows, MacOS X, and some other platforms where X is available. The GLX Specification is available in the OpenGL Extension Registry (see appendix O).

The WGL API supports use of OpenGL with Microsoft Windows. WGL is documented in Microsoft’s MSDN system, although no full specification exists.

Several APIs exist supporting use of OpenGL with Quartz, the MacOS X window system, including CGL, AGL, and NSOpenGLView. These APIs are documented on Apple’s developer website.

The Khronos Native Platform Graphics Interface or “EGL Specification” describes the EGL API for use of OpenGL ES on mobile and embedded devices. EGL implementations may be available supporting OpenGL as well. The EGL Specification is available in the Khronos Extension Registry at URL

http://www.khronos.org/registry/egl

Chapter 2

OpenGL Operation

2.1 OpenGL Fundamentals

OpenGL (henceforth, the “GL”) is concerned only with rendering into a framebuffer (and reading values stored in that framebuffer). There is no support for other peripherals sometimes associated with graphics hardware, such as mice and keyboards. Programmers must rely on other mechanisms to obtain user input.

The GL draws primitives subject to a number of selectable modes and shader programs. Each primitive is a point, line segment, polygon, or pixel rectangle. Each mode may be changed independently; the setting of one does not affect the settings of others (although many modes may interact to determine what eventually ends up in the framebuffer). Modes are set, primitives specified, and other GL operations described by sending commands in the form of function or procedure calls.

Primitives are defined by a group of one or more vertices. A vertex defines a point, an endpoint of an edge, or a corner of a polygon where two edges meet. Data (consisting of positional coordinates, colors, normals, and texture coordinates) are associated with a vertex and each vertex is processed independently, in order, and in the same way. The only exception to this rule is if the group of vertices must be clipped so that the indicated primitive fits within a specified region; in this case vertex data may be modified and new vertices created. The type of clipping depends on which primitive the group of vertices represents.

Commands are always processed in the order in which they are received, although there may be an indeterminate delay before the effects of a command are realized. This means, for example, that one primitive must be drawn completely before any subsequent one can affect the framebuffer. It also means that queries and pixel read operations return state consistent with complete execution of all previously invoked GL commands, except where explicitly specified otherwise. In general, the effects of a GL command on either GL modes or the framebuffer must be complete before any subsequent command can have any such effects.

In the GL, data binding occurs on call. This means that data passed to a command are interpreted when that command is received. Even if the command requires a pointer to data, those data are interpreted when the call is made, and any subsequent changes to the data have no effect on the GL (unless the same pointer is used in a subsequent command).
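The binding-on-call rule can be sketched with a toy stand-in for a pointer-taking command. The names vertex3fv and current_vertex below are illustrative only, not GL entry points:

```c
#include <assert.h>

/* Toy model of "data binding occurs on call": a command that takes a
 * pointer copies the data when it is invoked, so later changes to the
 * caller's array have no effect on the GL. */
static float current_vertex[3];

static void vertex3fv(const float *v)
{
    /* The data are interpreted now, at call time. */
    current_vertex[0] = v[0];
    current_vertex[1] = v[1];
    current_vertex[2] = v[2];
}
```

After vertex3fv(p) returns, modifying the array p does not change what the GL saw; only a second call (possibly with the same pointer) would.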

The GL provides direct control over the fundamental operations of 3D and 2D graphics. This includes specification of such parameters as vertex and fragment shaders, transformation matrices, lighting equation coefficients, antialiasing methods, and pixel update operators. It does not provide a means for describing or modeling complex geometric objects. Another way to describe this situation is to say that the GL provides mechanisms to describe how complex geometric objects are to be rendered rather than mechanisms to describe the complex objects themselves.

The model for interpretation of GL commands is client-server. That is, a program (the client) issues commands, and these commands are interpreted and processed by the GL (the server). The server may or may not operate on the same computer as the client. In this sense, the GL is “network-transparent.” A server may maintain a number of GL contexts, each of which is an encapsulation of current GL state. A client may choose to connect to any one of these contexts. Issuing GL commands when the program is not connected to a context results in undefined behavior.

The GL interacts with two classes of framebuffers: window system-provided and application-created. There is at most one window system-provided framebuffer at any time, referred to as the default framebuffer. Application-created framebuffers, referred to as framebuffer objects, may be created as desired. These two types of framebuffer are distinguished primarily by the interface for configuring and managing their state.

The effects of GL commands on the default framebuffer are ultimately controlled by the window system, which allocates framebuffer resources, determines which portions of the default framebuffer the GL may access at any given time, and communicates to the GL how those portions are structured. Therefore, there are no GL commands to initialize a GL context or configure the default framebuffer. Similarly, display of framebuffer contents on a physical display device (including the transformation of individual framebuffer values by such techniques as gamma correction) is not addressed by the GL.

Allocation and configuration of the default framebuffer occurs outside of the GL in conjunction with the window system, using companion APIs such as GLX, WGL, and CGL for GL implementations running on the X Window System, Microsoft Windows, and MacOS X respectively.

Allocation and initialization of GL contexts is also done using these companion APIs. GL contexts can typically be associated with different default framebuffers, and some context state is determined at the time this association is performed.

It is possible to use a GL context without a default framebuffer, in which case a framebuffer object must be used to perform all rendering. This is useful for applications needing to perform offscreen rendering.

The GL is designed to be run on a range of graphics platforms with varying graphics capabilities and performance. To accommodate this variety, we specify ideal behavior instead of actual behavior for certain GL operations. In cases where deviation from the ideal is allowed, we also specify the rules that an implementation must obey if it is to approximate the ideal behavior usefully. This allowed variation in GL behavior implies that two distinct GL implementations may not agree pixel for pixel when presented with the same input even when run on identical framebuffer configurations.

Finally, command names, constants, and types are prefixed in the GL (by gl, GL_, and GL, respectively in C) to reduce name clashes with other packages. The prefixes are omitted in this document for clarity.

2.1.1 Floating-Point Computation

The GL must perform a number of floating-point operations during the course of its operation. In some cases, the representation and/or precision of such operations is defined or limited: by the OpenGL Shading Language Specification for operations in shaders, and in some cases implicitly limited by the specified format of vertex, texture, or renderbuffer data consumed by the GL. Otherwise, the representation of such floating-point numbers, and the details of how operations on them are performed, is not specified. We require simply that numbers’ floating-point parts contain enough bits and that their exponent fields are large enough so that individual results of floating-point operations are accurate to about 1 part in 10^5. The maximum representable magnitude of a floating-point number used to represent positional, normal, or texture coordinates must be at least 2^32; the maximum representable magnitude for colors must be at least 2^10. The maximum representable magnitude for all other floating-point values must be at least 2^32. x × 0 = 0 × x = 0 for any non-infinite and non-NaN x. 1 × x = x × 1 = x. x + 0 = 0 + x = x. 0^0 = 1. (Occasionally further requirements will be specified.) Most single-precision floating-point formats meet these requirements.

The special values Inf and −Inf encode values with magnitudes too large to be represented; the special value NaN encodes “Not A Number” values resulting from undefined arithmetic operations such as 0/0. Implementations are permitted, but not required, to support Infs and NaNs in their floating-point computations.

Any representable floating-point value is legal as input to a GL command that requires floating-point data. The result of providing a value that is not a floating-point number to such a command is unspecified, but must not lead to GL interruption or termination. In IEEE arithmetic, for example, providing a negative zero or a denormalized number to a GL command yields predictable results, while providing a NaN or an infinity yields unspecified results.

Some calculations require division. In such cases (including implied divisions required by vector normalizations), a division by zero produces an unspecified result but must not lead to GL interruption or termination.

2.1.2 16-Bit Floating-Point Numbers

A 16-bit floating-point number has a 1-bit sign (S), a 5-bit exponent (E), and a 10-bit mantissa (M). The value V of a 16-bit floating-point number is determined by the following:

    V = (−1)^S × 0.0,                        E = 0,  M = 0
    V = (−1)^S × 2^−14 × (M / 2^10),         E = 0,  M ≠ 0
    V = (−1)^S × 2^(E−15) × (1 + M / 2^10),  0 < E < 31
    V = (−1)^S × Inf,                        E = 31, M = 0
    V = NaN,                                 E = 31, M ≠ 0

If the floating-point number is interpreted as an unsigned 16-bit integer N, then

    S = floor((N mod 65536) / 32768)
    E = floor((N mod 32768) / 1024)
    M = N mod 1024

Any representable 16-bit floating-point value is legal as input to a GL command that accepts 16-bit floating-point data. The result of providing a value that is not a floating-point number (such as Inf or NaN) to such a command is unspecified, but must not lead to GL interruption or termination. Providing a denormalized number or negative zero to GL must yield predictable results.
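The decoding rules above can be checked with a small C helper. The function name half_to_float is illustrative, not part of the GL API; ldexpf from <math.h> scales by powers of two:

```c
#include <assert.h>
#include <math.h>
#include <stdint.h>

/* Decode a 16-bit floating-point value per the rules above:
 * 1-bit sign S, 5-bit exponent E, 10-bit mantissa M.
 * Illustrative sketch, not a GL entry point. */
static float half_to_float(uint16_t n)
{
    unsigned s = (n >> 15) & 1u;   /* S = floor((N mod 65536) / 32768) */
    unsigned e = (n >> 10) & 31u;  /* E = floor((N mod 32768) / 1024)  */
    unsigned m = n & 1023u;        /* M = N mod 1024                   */
    float sign = s ? -1.0f : 1.0f;

    if (e == 0)                    /* zero (M = 0) or denormal (M != 0) */
        return sign * ldexpf((float)m / 1024.0f, -14);
    if (e == 31)                   /* Inf (M = 0) or NaN (M != 0) */
        return m == 0 ? sign * INFINITY : NAN;
    return sign * ldexpf(1.0f + (float)m / 1024.0f, (int)e - 15);
}
```

For instance, the encoding with E = 15 and M = 0 decodes to 2^0 × 1 = 1.0.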


2.1.3 Unsigned 11-Bit Floating-Point Numbers

An unsigned 11-bit floating-point number has no sign bit, a 5-bit exponent (E), and a 6-bit mantissa (M). The value V of an unsigned 11-bit floating-point number is determined by the following:

    V = 0.0,                      E = 0,  M = 0
    V = 2^−14 × (M / 64),         E = 0,  M ≠ 0
    V = 2^(E−15) × (1 + M / 64),  0 < E < 31
    V = Inf,                      E = 31, M = 0
    V = NaN,                      E = 31, M ≠ 0

If the floating-point number is interpreted as an unsigned 11-bit integer N, then

    E = floor(N / 64)
    M = N mod 64

When a floating-point value is converted to an unsigned 11-bit floating-point representation, finite values are rounded to the closest representable finite value. While less accurate, implementations are allowed to always round in the direction of zero. This means negative values are converted to zero. Likewise, finite positive values greater than 65024 (the maximum finite representable unsigned 11-bit floating-point value) are converted to 65024. Additionally: negative infinity is converted to zero; positive infinity is converted to positive infinity; and both positive and negative NaN are converted to positive NaN.

Any representable unsigned 11-bit floating-point value is legal as input to a GL command that accepts 11-bit floating-point data. The result of providing a value that is not a floating-point number (such as Inf or NaN) to such a command is unspecified, but must not lead to GL interruption or termination. Providing a denormalized number to GL must yield predictable results.
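A decoder analogous to the 16-bit case, with the mantissa divisor 64 substituted (uf11_to_float is an illustrative name, not a GL entry point):

```c
#include <assert.h>
#include <math.h>
#include <stdint.h>

/* Decode an unsigned 11-bit floating-point value per the rules above:
 * no sign bit, 5-bit exponent E, 6-bit mantissa M.
 * Illustrative sketch, not a GL entry point. */
static float uf11_to_float(uint16_t n)
{
    unsigned e = (n >> 6) & 31u;   /* E = floor(N / 64) */
    unsigned m = n & 63u;          /* M = N mod 64      */

    if (e == 0)                    /* zero (M = 0) or denormal (M != 0) */
        return ldexpf((float)m / 64.0f, -14);
    if (e == 31)                   /* Inf (M = 0) or NaN (M != 0) */
        return m == 0 ? INFINITY : NAN;
    return ldexpf(1.0f + (float)m / 64.0f, (int)e - 15);
}
```

The largest finite encoding (E = 30, M = 63) decodes to 2^15 × (1 + 63/64) = 65024, the clamping bound mentioned above.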

2.1.4 Unsigned 10-Bit Floating-Point Numbers

An unsigned 10-bit floating-point number has no sign bit, a 5-bit exponent (E), and a 5-bit mantissa (M). The value V of an unsigned 10-bit floating-point number is determined by the following:

    V = 0.0,                      E = 0,  M = 0
    V = 2^−14 × (M / 32),         E = 0,  M ≠ 0
    V = 2^(E−15) × (1 + M / 32),  0 < E < 31
    V = Inf,                      E = 31, M = 0
    V = NaN,                      E = 31, M ≠ 0

If the floating-point number is interpreted as an unsigned 10-bit integer N, then

    E = floor(N / 32)
    M = N mod 32

When a floating-point value is converted to an unsigned 10-bit floating-point representation, finite values are rounded to the closest representable finite value. While less accurate, implementations are allowed to always round in the direction of zero. This means negative values are converted to zero. Likewise, finite positive values greater than 64512 (the maximum finite representable unsigned 10-bit floating-point value) are converted to 64512. Additionally: negative infinity is converted to zero; positive infinity is converted to positive infinity; and both positive and negative NaN are converted to positive NaN.

Any representable unsigned 10-bit floating-point value is legal as input to a GL command that accepts 10-bit floating-point data. The result of providing a value that is not a floating-point number (such as Inf or NaN) to such a command is unspecified, but must not lead to GL interruption or termination. Providing a denormalized number to GL must yield predictable results.
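The conversion rules can be sketched using the permitted always-round-toward-zero behavior. float_to_uf10 is an illustrative name; a round-to-nearest implementation would differ only in how M is obtained:

```c
#include <assert.h>
#include <math.h>
#include <stdint.h>

/* Convert a float to the unsigned 10-bit format using the permitted
 * round-toward-zero behavior described above. Illustrative sketch,
 * not a GL entry point. */
static uint16_t float_to_uf10(float f)
{
    if (isnan(f))
        return (31u << 5) | 1u;    /* E=31, M!=0: a positive NaN */
    if (f <= 0.0f)
        return 0;                  /* negatives and -Inf convert to zero */
    if (isinf(f))
        return 31u << 5;           /* E=31, M=0: +Inf */
    if (f >= 64512.0f)
        return (30u << 5) | 31u;   /* clamp to largest finite value, 64512 */

    int e;
    float m = frexpf(f, &e);       /* f = m * 2^e with m in [0.5, 1) */
    e -= 1;                        /* rewrite as (2m) * 2^(e-1), 2m in [1, 2) */
    if (e < -14)                   /* denormal range: f = (M/32) * 2^-14 */
        return (uint16_t)(f * 32.0f * 16384.0f);        /* truncate */
    unsigned M = (unsigned)((m * 2.0f - 1.0f) * 32.0f); /* truncate fraction */
    return (uint16_t)(((unsigned)(e + 15) << 5) | M);
}
```

For example, 2.5 = 2^1 × (1 + 8/32) encodes with exponent field 16 and mantissa field 8.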

2.2 GL State

The GL maintains considerable state. This document enumerates each state variable and describes how each variable can be changed. For purposes of discussion, state variables are categorized somewhat arbitrarily by their function. Although we describe the operations that the GL performs on the framebuffer, the framebuffer is not a part of GL state.

We distinguish two types of state. The first type of state, called GL server state, resides in the GL server. The majority of GL state falls into this category. The second type of state, called GL client state, resides in the GL client. Unless otherwise specified, all state referred to in this document is GL server state; GL client state is specifically identified. Each instance of a GL context implies one complete set of GL server state; each connection from a client to a server implies a set of both GL client state and GL server state.

While an implementation of the GL may be hardware dependent, this discussion is independent of the specific hardware on which a GL is implemented. We are therefore concerned with the state of graphics hardware only when it corresponds precisely to GL state.

2.2.1 Shared Object State

It is possible for groups of contexts to share certain state. Enabling such sharing between contexts is done through window system binding APIs such as those described in section 1.7.2. These APIs are responsible for creation and management of contexts, and are not discussed further here. More detailed discussion of the behavior of shared objects is included in appendix D. Except as defined in this appendix, all state in a context is specific to that context only.

2.3 GL Command Syntax

GL commands are functions or procedures. Various groups of commands perform the same operation but differ in how arguments are supplied to them. To conveniently accommodate this variation, we adopt a notation for describing commands and their arguments.

GL commands are formed from a name followed, depending on the particular command, by up to 4 characters. The first character indicates the number of values of the indicated type that must be presented to the command. The second character or character pair indicates the specific type of the arguments: 8-bit integer, 16-bit integer, 32-bit integer, single-precision floating-point, or double-precision floating-point. The final character, if present, is v, indicating that the command takes a pointer to an array (a vector) of values rather than a series of individual arguments. Two specific examples come from the Vertex command:

void Vertex3f( float x, float y, float z );

and

void Vertex2sv( short v[2] );

These examples show the ANSI C declarations for these commands. In general, a command declaration has the form¹

¹ The declarations shown in this document apply to ANSI C. Languages such as C++ and Ada that allow passing of argument type information admit simpler declarations and fewer entry points.


Letter Corresponding GL Type
b byte
s short
i int
f float
d double
ub ubyte
us ushort
ui uint

Table 2.1: Correspondence of command suffix letters to GL argument types. Refer to table 2.2 for definitions of the GL types.

rtype Name{ε1234}{ε b s i f d ub us ui}{εv} ( [args ,] T arg1 , ..., T argN [, args] );

rtype is the return type of the function. The braces ({}) enclose a series of characters (or character pairs) of which one is selected. ε indicates no character. The arguments enclosed in brackets ([args ,] and [, args]) may or may not be present. The N arguments arg1 through argN have type T, which corresponds to one of the type letters or letter pairs as indicated in table 2.1 (if there are no letters, then the arguments’ type is given explicitly). If the final character is not v, then N is given by the digit 1, 2, 3, or 4 (if there is no digit, then the number of arguments is fixed). If the final character is v, then only arg1 is present and it is an array of N values of the indicated type. Finally, we indicate an unsigned type by the shorthand of prepending a u to the beginning of the type name (so that, for instance, unsigned char is abbreviated uchar).

For example,

void Normal3{fd}( T arg );

indicates the two declarations

void Normal3f( float arg1, float arg2, float arg3 );
void Normal3d( double arg1, double arg2, double arg3 );

while

void Normal3{fd}v( T arg );


means the two declarations

void Normal3fv( float arg[3] );

void Normal3dv( double arg[3] );

Arguments whose type is fixed (i.e. not indicated by a suffix on the command) are of one of the GL data types summarized in table 2.2, or pointers to one of these types.
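The naming convention above can be spelled out with a toy helper that assembles a C-binding name from its parts. gl_command_name is purely illustrative and not part of the GL:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Assemble a GL-style C command name from a base name, an argument
 * count digit, a type suffix letter (or letter pair) from table 2.1,
 * and an optional trailing 'v' for the vector form. Illustrative
 * only; not part of the GL. */
static void gl_command_name(char *out, size_t cap, const char *base,
                            int count, const char *type, int vector)
{
    snprintf(out, cap, "gl%s%d%s%s", base, count, type, vector ? "v" : "");
}
```

Applied to the earlier examples, ("Vertex", 3, "f", 0) yields glVertex3f and ("Vertex", 2, "s", 1) yields glVertex2sv.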

2.4 Basic GL Operation

Figure 2.1 shows a schematic diagram of the GL. Commands enter the GL on the left. Some commands specify geometric objects to be drawn while others control how the objects are handled by the various stages. Most commands may be accumulated in a display list for processing by the GL at a later time. Otherwise, commands are effectively sent through a processing pipeline.

The first stage provides an efficient means for approximating curve and surface geometry by evaluating polynomial functions of input values. The next stage operates on geometric primitives described by vertices: points, line segments, and polygons. In this stage vertices are transformed and lit, and primitives are clipped to a viewing volume in preparation for the next stage, rasterization. The rasterizer produces a series of framebuffer addresses and values using a two-dimensional description of a point, line segment, or polygon. Each fragment so produced is fed to the next stage that performs operations on individual fragments before they finally alter the framebuffer. These operations include conditional updates into the framebuffer based on incoming and previously stored depth values (to effect depth buffering), blending of incoming fragment colors with stored colors, as well as masking and other logical operations on fragment values.

Finally, there is a way to bypass the vertex processing portion of the pipeline to send a block of fragments directly to the individual fragment operations, eventually causing a block of pixels to be written to the framebuffer; values may also be read back from the framebuffer or copied from one portion of the framebuffer to another. These transfers may include some type of decoding or encoding.

This ordering is meant only as a tool for describing the GL, not as a strict rule of how the GL is implemented, and we present it only as a means to organize the various operations of the GL. Objects such as curved surfaces, for instance, may be transformed before they are converted to polygons.


GL Type Minimum Bit Width Description
boolean 1 Boolean
byte 8 Signed 2’s complement binary integer
ubyte 8 Unsigned binary integer
char 8 Characters making up strings
short 16 Signed 2’s complement binary integer
ushort 16 Unsigned binary integer
int 32 Signed 2’s complement binary integer
uint 32 Unsigned binary integer
sizei 32 Non-negative binary integer size
enum 32 Enumerated binary integer value
intptr ptrbits Signed 2’s complement binary integer
sizeiptr ptrbits Non-negative binary integer size
bitfield 32 Bit field
half 16 Half-precision floating-point value encoded in an unsigned scalar
float 32 Floating-point value
clampf 32 Floating-point value clamped to [0,1]
double 64 Floating-point value
clampd 64 Floating-point value clamped to [0,1]
time 64 Unsigned binary representing an absolute or relative time interval. Precision is nanoseconds but accuracy is implementation-dependent.

Table 2.2: GL data types. GL types are not C types. Thus, for example, GL type int is referred to as GLint outside this document, and is not necessarily equivalent to the C type int. An implementation may use more bits than the number indicated in the table to represent a GL type. Correct interpretation of integer values outside the minimum range is not required, however. ptrbits is the number of bits required to represent a pointer type; in other words, types intptr and sizeiptr must be sufficiently large as to store any address.
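One way to satisfy the minimum widths in table 2.2 with C99 fixed-width types is sketched below. Real GL headers choose platform-specific types; these particular typedefs are an assumed, illustrative assignment, not the official ones:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative C99 typedefs meeting the minimum bit widths of
 * table 2.2; actual GL headers may use wider or different types. */
typedef int8_t    GLbyte;     /* byte: 8-bit signed              */
typedef uint8_t   GLubyte;    /* ubyte: 8-bit unsigned           */
typedef int16_t   GLshort;    /* short: 16-bit signed            */
typedef uint16_t  GLushort;   /* ushort: 16-bit unsigned         */
typedef int32_t   GLint;      /* int: 32-bit signed              */
typedef uint32_t  GLuint;     /* uint: 32-bit unsigned           */
typedef int32_t   GLsizei;    /* sizei: non-negative size        */
typedef uint32_t  GLenum;     /* enum: enumerated value          */
typedef uint32_t  GLbitfield; /* bitfield                        */
typedef uint16_t  GLhalf;     /* half: 16-bit float in an unsigned scalar */
typedef float     GLfloat;    /* float: 32-bit floating-point    */
typedef double    GLdouble;   /* double: 64-bit floating-point   */
typedef intptr_t  GLintptr;   /* intptr: pointer-sized signed    */
typedef intptr_t  GLsizeiptr; /* sizeiptr: pointer-sized, holds non-negative sizes */
```

The ptrbits requirement is met here by using intptr_t, which is as wide as a pointer on any conforming platform.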


2.5 GL Errors

The GL detects only a subset of those conditions that could be considered errors. This is because in many cases error checking would adversely impact the performance of an error-free program.

The command

enum GetError( void );

is used to obtain error information. Each detectable error is assigned a numeric code. When an error is detected, a flag is set and the code is recorded. Further errors, if they occur, do not affect this recorded code. When GetError is called, the code is returned and the flag is cleared, so that a further error will again record its code. If a call to GetError returns NO_ERROR, then there has been no detectable error since the last call to GetError (or since the GL was initialized).

To allow for distributed implementations, there may be several flag-code pairs. In this case, after a call to GetError returns a value other than NO_ERROR, each subsequent call returns the non-zero code of a distinct flag-code pair (in unspecified order), until all non-NO_ERROR codes have been returned. When there are no more non-NO_ERROR error codes, all flags are reset. This scheme requires some positive number of pairs of a flag bit and an integer. The initial state of all flags is cleared and the initial value of all codes is NO_ERROR.
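The behavior of a single flag-code pair can be modeled in a few lines. The names record_error and get_error, and the enum values, are illustrative stand-ins, not the real GL names or numeric codes:

```c
#include <assert.h>

/* Minimal model of one flag-code pair: the first detected error is
 * recorded, later errors do not overwrite it, and reading the error
 * returns the code and clears the flag. Illustrative only. */
enum { NO_ERROR = 0, INVALID_ENUM = 1, INVALID_VALUE = 2 };

static int error_flag;
static int error_code = NO_ERROR;

static void record_error(int code)   /* called when the GL detects an error */
{
    if (!error_flag) {               /* further errors do not affect the code */
        error_flag = 1;
        error_code = code;
    }
}

static int get_error(void)           /* models GetError for one pair */
{
    int code = error_code;
    error_flag = 0;                  /* clear the flag ... */
    error_code = NO_ERROR;           /* ... so a later error records again */
    return code;
}
```

A distributed implementation would keep several such pairs and drain them one per call, as described above.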


Table 2.3 summarizes GL errors. Currently, when an error flag is set, results of GL operation are undefined only if OUT_OF_MEMORY has occurred. In other cases, the command generating the error is ignored so that it has no effect on GL state or framebuffer contents. If the generating command returns a value, it returns zero. If the generating command modifies values through a pointer argument, no change is made to these values. These error semantics apply only to GL errors, not to system errors such as memory access errors. This behavior is the current behavior; the action of the GL in the presence of errors is subject to change.

Several error generation conditions are implicit in the description of every GL command:

  • If a command that requires an enumerated value is passed a symbolic constant that is not one of those specified as allowable for that command, the error INVALID_ENUM is generated. This is the case even if the argument is a pointer to a symbolic constant, if the value pointed to is not allowable for the given command.

  • If a negative number is provided where an argument of type sizei or sizeiptr is specified, the error INVALID_VALUE is generated.

  • If memory is exhausted as a side effect of the execution of a command, the error OUT_OF_MEMORY may be generated.

Otherwise, errors are generated only for conditions that are explicitly described in this specification.

2.6 Begin/End Paradigm

In the GL, most geometric objects are drawn by enclosing a series of coordinate sets that specify vertices and optionally normals, texture coordinates, and colors between Begin/End pairs. There are ten geometric objects that are drawn this way: points, line segments, line segment loops, separated line segments, polygons, triangle strips, triangle fans, separated triangles, quadrilateral strips, and separated quadrilaterals.

Each vertex is specified with two, three, or four coordinates. In addition, a current normal, multiple current texture coordinate sets, multiple current generic vertex attributes, current color, current secondary color, and current fog coordinate may be used in processing each vertex. Normals are used by the GL in lighting calculations; the current normal is a three-dimensional vector that may be set by sending three coordinates that specify it. Texture coordinates determine how



Error Description Offending command ignored?
INVALID_ENUM enum argument out of range Yes
INVALID_VALUE Numeric argument out of range Yes
INVALID_OPERATION Operation illegal in current state Yes
INVALID_FRAMEBUFFER_OPERATION Framebuffer object is not complete Yes
STACK_OVERFLOW Command would cause a stack overflow Yes
STACK_UNDERFLOW Command would cause a stack underflow Yes
OUT_OF_MEMORY Not enough memory left to execute command Unknown
TABLE_TOO_LARGE The specified table is too large Yes

Table 2.3: Summary of GL errors

a texture image is mapped onto a primitive. Multiple sets of texture coordinates may be used to specify how multiple texture images are mapped onto a primitive. The number of texture units supported is implementation dependent but must be at least two. The number of texture units supported can be queried with the state MAX_TEXTURE_UNITS. Generic vertex attributes can be accessed from within vertex shaders (section 2.20) and used to compute values for consumption by later processing stages.

Primary and secondary colors are associated with each vertex (see section 3.10). These associated colors are either based on the current color and current secondary color or produced by lighting, depending on whether or not lighting is enabled. Texture and fog coordinates are similarly associated with each vertex. Multiple sets of texture coordinates may be associated with a vertex. Figure 2.2 summarizes the association of auxiliary data with a transformed vertex to produce a processed vertex.

The current values are part of GL state. Vertices and normals are transformed, colors may be affected or replaced by lighting, and texture coordinates are transformed and possibly affected by a texture coordinate generation function. The processing indicated for each current value is applied for each vertex that is sent to the GL.

The methods by which vertices, normals, texture coordinates, fog coordinate, generic attributes, and colors are sent to the GL, as well as how normals are transformed and how vertices are mapped to the two-dimensional screen, are discussed later.

Before colors have been assigned to a vertex, the state required by a vertex is the vertex’s coordinates, the current normal, the current edge flag (see section 2.6.2), the current material properties (see section 2.19.2), the current fog coordinate, the multiple generic vertex attribute sets, and the multiple current texture coordinate sets. Because color assignment is done vertex-by-vertex, a processed vertex comprises the vertex’s coordinates, its edge flag, its fog coordinate, its assigned colors, and its multiple texture coordinate sets.

Figure 2.3 shows the sequence of operations that builds a primitive (point, line segment, or polygon) from a sequence of vertices. After a primitive is formed, it is clipped to a viewing volume. This may alter the primitive by altering vertex coordinates, texture coordinates, and colors. In the case of line and polygon primitives, clipping may insert new vertices into the primitive. The vertices defining a primitive to be rasterized have texture coordinates and colors associated with them.

2.6.1 Begin and End

Vertices making up one of the supported geometric object types are specified by enclosing commands defining those vertices between the two commands

void Begin( enum mode );
void End( void );


There is no limit on the number of vertices that may be specified between a Begin and an End.

Points. A series of individual points may be specified by calling Begin with an argument value of POINTS. No special state need be kept between Begin and End in this case, since each point is independent of previous and following points.

Line Strips. A series of one or more connected line segments is specified by enclosing a series of two or more endpoints within a Begin/End pair when Begin is called with LINE_STRIP. In this case, the first vertex specifies the first segment’s start point while the second vertex specifies the first segment’s endpoint and the second segment’s start point. In general, the ith vertex (for i > 1) specifies the beginning of the ith segment and the end of the (i−1)st. The last vertex specifies the end of the last segment. If only one vertex is specified between the Begin/End pair, then no primitive is generated.

The required state consists of the processed vertex produced from the last vertex that was sent (so that a line segment can be generated from it to the current vertex), and a boolean flag indicating if the current vertex is the first vertex.

Line Loops. Line loops, specified with the LINE_LOOP argument value to Begin, are the same as line strips except that a final segment is added from the final specified vertex to the first vertex. The additional state consists of the processed first vertex.

Separate Lines. Individual line segments, each specified by a pair of vertices, are generated by surrounding vertex pairs with Begin and End when the value of the argument to Begin is LINES. In this case, the first two vertices between a Begin and End pair define the first segment, with subsequent pairs of vertices each defining one more segment. If the number of specified vertices is odd, then the last one is ignored. The state required is the same as for line strips but it is used differently: a vertex holding the first vertex of the current segment, and a boolean flag indicating whether the current vertex is odd or even (a segment start or end).

Polygons. A polygon is described by specifying its boundary as a series of line segments. When Begin is called with POLYGON, the bounding line segments are specified in the same way as line loops. Depending on the current state of the GL, a polygon may be rendered in one of several ways such as outlining its border or filling its interior. A polygon described with fewer than three vertices does not generate a primitive.

Only convex polygons are guaranteed to be drawn correctly by the GL. If a specified polygon is nonconvex when projected onto the window, then the rendered polygon need only lie within the convex hull of the projected vertices defining its boundary.

The state required to support polygons consists of at least two processed vertices (more than two are never required, although an implementation may use more); this is because a convex polygon can be rasterized as its vertices arrive, before all of them have been specified. The order of the vertices is significant in lighting and polygon rasterization (see sections 2.19.1 and 3.6.1).

Triangle strips. A triangle strip is a series of triangles connected along shared edges. A triangle strip is specified by giving a series of defining vertices between a Begin/End pair when Begin is called with TRIANGLE_STRIP. In this case, the first three vertices define the first triangle (and their order is significant, just as for polygons). Each subsequent vertex defines a new triangle using that point along with two vertices from the previous triangle. A Begin/End pair enclosing fewer than three vertices, when TRIANGLE_STRIP has been supplied to Begin, produces no primitive. See figure 2.4.

The state required to support triangle strips consists of a flag indicating if the first triangle has been completed, two stored processed vertices (called vertex A and vertex B), and a one-bit pointer indicating which stored vertex will be replaced with the next vertex. After a Begin(TRIANGLE_STRIP), the pointer is initialized to point to vertex A. Each vertex sent between a Begin/End pair toggles the pointer. Therefore, the first vertex is stored as vertex A, the second stored as vertex B, the third stored as vertex A, and so on. Any vertex after the second one sent forms a triangle from vertex A, vertex B, and the current vertex (in that order).

Triangle fans. A triangle fan is the same as a triangle strip with one exception: each vertex after the first always replaces vertex B of the two stored vertices. The vertices of a triangle fan are enclosed between Begin and End when the value of the argument to Begin is TRIANGLE_FAN.

Separate Triangles. Separate triangles are specified by placing vertices between Begin and End when the value of the argument to Begin is TRIANGLES. In this case, the 3i + 1st, 3i + 2nd, and 3i + 3rd vertices (in that order) determine a triangle for each i = 0, 1, ..., n - 1, where there are 3n + k vertices between the Begin and End. k is either 0, 1, or 2; if k is not zero, the final k vertices are ignored. For each triangle, vertex A is vertex 3i and vertex B is vertex 3i + 1. Otherwise, separate triangles are the same as a triangle strip.

The rules given for polygons also apply to each triangle generated from a triangle strip, triangle fan or from separate triangles.

Quadrilateral (quad) strips. Quad strips generate a series of edge-sharing quadrilaterals from vertices appearing between Begin and End, when Begin is called with QUAD_STRIP. If the m vertices between the Begin and End are v1, ..., vm, where vj is the jth specified vertex, then quad i has vertices (in order) v2i-1, v2i, v2i+2, and v2i+1, with i = 1, ..., ⌊m/2⌋ - 1. The state required is thus three processed vertices, to store the last two vertices of the previous quad along with the third vertex (the first new vertex) of the current quad, a flag to indicate when the first quad has been completed, and a one-bit counter to count members of a vertex pair. See figure 2.5.

A quad strip with fewer than four vertices generates no primitive. If the number of vertices specified for a quadrilateral strip between Begin and End is odd, the final vertex is ignored.

2.6. BEGIN/END PARADIGM

Separate Quadrilaterals. Separate quads are just like quad strips except that each group of four vertices, the 4j + 1st, the 4j + 2nd, the 4j + 3rd, and the 4j + 4th, generates a single quad, for j = 0, 1, ..., n - 1. The total number of vertices between Begin and End is 4n + k, where 0 ≤ k ≤ 3; if k is not zero, the final k vertices are ignored. Separate quads are generated by calling Begin with the argument value QUADS.

The rules given for polygons also apply to each quad generated in a quad strip or from separate quads.

The state required for Begin and End consists of an eleven-valued integer indicating either one of the ten possible Begin/End modes, or that no Begin/End mode is being processed.

Calling Begin will result in an INVALID_FRAMEBUFFER_OPERATION error if the object bound to DRAW_FRAMEBUFFER_BINDING is not framebuffer complete (see section 4.4.4).

2.6.2 Polygon Edges

Each edge of each primitive generated from a polygon, triangle strip, triangle fan, separate triangle set, quadrilateral strip, or separate quadrilateral set, is flagged as either boundary or non-boundary. These classifications are used during polygon rasterization; some modes affect the interpretation of polygon boundary edges (see section 3.6.4). By default, all edges are boundary edges, but the flagging of polygons, separate triangles, or separate quadrilaterals may be altered by calling

void EdgeFlag( boolean flag );
void EdgeFlagv( boolean *flag );

to change the value of a flag bit. If flag is zero, then the flag bit is set to FALSE; if flag is non-zero, then the flag bit is set to TRUE.

When Begin is supplied with one of the argument values POLYGON, TRIANGLES, or QUADS, each vertex specified within a Begin and End pair begins an edge. If the edge flag bit is TRUE, then each specified vertex begins an edge that is flagged as boundary. If the bit is FALSE, then induced edges are flagged as non-boundary.

The state required for edge flagging consists of one current flag bit. Initially, the bit is TRUE. In addition, each processed vertex of an assembled polygonal primitive must be augmented with a bit indicating whether or not the edge beginning on that vertex is boundary or non-boundary.


2.6.3 GL Commands within Begin/End

The only GL commands that are allowed within any Begin/End pairs are the commands for specifying vertex coordinates, vertex colors, normal coordinates, texture coordinates, generic vertex attributes, and fog coordinates (Vertex, Color, SecondaryColor, Index, Normal, TexCoord and MultiTexCoord, VertexAttrib, FogCoord), the ArrayElement command (see section 2.8), the EvalCoord and EvalPoint commands (see section 5.1), commands for specifying lighting material parameters (Material commands; see section 2.19.2), display list invocation commands (CallList and CallLists; see section 5.4), and the EdgeFlag command. Executing any other GL command between the execution of Begin and the corresponding execution of End results in the error INVALID_OPERATION. Executing Begin after Begin has already been executed but before an End is executed generates the INVALID_OPERATION error, as does executing End without a previous corresponding Begin.

Execution of the commands EnableClientState, DisableClientState, PushClientAttrib, PopClientAttrib, ColorPointer, FogCoordPointer, EdgeFlagPointer, IndexPointer, NormalPointer, TexCoordPointer, SecondaryColorPointer, VertexPointer, VertexAttribPointer, ClientActiveTexture, InterleavedArrays, and PixelStore is not allowed within any Begin/End pair, but an error may or may not be generated if such execution occurs. If an error is not generated, GL operation is undefined. (These commands are described in sections 2.8, 3.7.1, and chapter 6.)

2.7 Vertex Specification

Vertices are specified by giving their coordinates in two, three, or four dimensions. This is done using one of several versions of the Vertex command:

void Vertex{234}{sifd}( T coords );
void Vertex{234}{sifd}v( T coords );

A call to any Vertex command specifies four coordinates: x, y, z, and w. The x coordinate is the first coordinate, y is second, z is third, and w is fourth. A call to Vertex2 sets the x and y coordinates; the z coordinate is implicitly set to zero and the w coordinate to one. Vertex3 sets x, y, and z to the provided values and w to one. Vertex4 sets all four coordinates, allowing the specification of an arbitrary point in projective three-space. Invoking a Vertex command outside of a Begin/End pair results in undefined behavior.



Current values are used in associating auxiliary data with a vertex as described in section 2.6. A current value may be changed at any time by issuing an appropriate command. The commands

void TexCoord{1234}{sifd}( T coords );
void TexCoord{1234}{sifd}v( T coords );

specify the current homogeneous texture coordinates, named s, t, r, and q. The TexCoord1 family of commands set the s coordinate to the provided single argument while setting t and r to 0 and q to 1. Similarly, TexCoord2 sets s and t to the specified values, r to 0 and q to 1; TexCoord3 sets s, t, and r, with q set to 1, and TexCoord4 sets all four texture coordinates.

Implementations must support at least two sets of texture coordinates. The commands

void MultiTexCoord{1234}{sifd}( enum texture, T coords );
void MultiTexCoord{1234}{sifd}v( enum texture, T coords );

take the coordinate set to be modified as the texture parameter. texture is a symbolic constant of the form TEXTUREi, indicating that texture coordinate set i is to be modified. The constants obey TEXTUREi = TEXTURE0 + i (i is in the range 0 to k - 1, where k is the implementation-dependent number of texture coordinate sets defined by MAX_TEXTURE_COORDS).

The TexCoord commands are exactly equivalent to the corresponding MultiTexCoord commands with texture set to TEXTURE0.

Gets of CURRENT_TEXTURE_COORDS return the texture coordinate set defined by the value of ACTIVE_TEXTURE.

Specifying an invalid texture coordinate set for the texture argument of MultiTexCoord results in undefined behavior.

The current normal is set using

void Normal3{bsifd}( T coords );
void Normal3{bsifd}v( T coords );

Byte, short, or integer values passed to Normal are converted to floating-point values as indicated for the corresponding (signed) type in table 2.10.

The current fog coordinate is set using

void FogCoord{fd}( T coord );
void FogCoord{fd}v( T coord );


There are several ways to set the current color and secondary color. The GL stores a current single-valued color index, as well as a current four-valued RGBA color and secondary color. Either the index or the color and secondary color are significant depending on whether the GL is in color index mode or RGBA mode. The mode selection is made when the GL is initialized.

The commands to set RGBA colors are

void Color{34}{bsifd ubusui}( T components );
void Color{34}{bsifd ubusui}v( T components );
void SecondaryColor3{bsifd ubusui}( T components );
void SecondaryColor3{bsifd ubusui}v( T components );

The Color command has two major variants: Color3 and Color4. The four value versions set all four values. The three value versions set R, G, and B to the provided values; A is set to 1.0. (The conversion of integer color components (R, G, B, and A) to floating-point values is discussed in section 2.19.)

The secondary color has only the three value versions. Secondary A is always set to 1.0.

Versions of the Color and SecondaryColor commands that take floating-point values accept values nominally between 0.0 and 1.0. 0.0 corresponds to the minimum while 1.0 corresponds to the maximum (machine dependent) value that a component may take on in the framebuffer (see section 2.19 on colors and coloring). Values outside [0, 1] are not clamped.

The command

void Index{sifd ub}( T index );
void Index{sifd ub}v( T index );

updates the current (single-valued) color index. It takes one argument, the value to which the current color index should be set. Values outside the (machine-dependent) representable range of color indices are not clamped.

Vertex shaders (see section 2.20) can be written to access an array of 4-component generic vertex attributes in addition to the conventional attributes specified previously. The first slot of this array is numbered 0, and the size of the array is specified by the implementation-dependent constant MAX_VERTEX_ATTRIBS.

To load values into a generic shader attribute declared as a floating-point scalar, vector, or matrix, use the commands

void VertexAttrib{1234}{sfd}( uint index, T values );
void VertexAttrib{123}{sfd}v( uint index, T values );
void VertexAttrib4{bsifd ubusui}v( uint index, T values );


void VertexAttrib4Nub( uint index, T values );
void VertexAttrib4N{bsi ubusui}v( uint index, T values );

The VertexAttrib4N* commands specify fixed-point values that are converted to a normalized [0, 1] or [-1, 1] range as shown in table 2.10, while the other commands specify values that are converted directly to the internal floating-point representation.

The resulting value(s) are loaded into the generic attribute at slot index, whose components are named x, y, z, and w. The VertexAttrib1* family of commands sets the x coordinate to the provided single argument while setting y and z to 0 and w to 1. Similarly, VertexAttrib2* commands set x and y to the specified values, z to 0 and w to 1; VertexAttrib3* commands set x, y, and z, with w set to 1, and VertexAttrib4* commands set all four coordinates.

The VertexAttrib* entry points may also be used to load shader attributes declared as a floating-point matrix. Each column of a matrix takes up one generic 4-component attribute slot out of the MAX_VERTEX_ATTRIBS available slots. Matrices are loaded into these slots in column major order. Matrix columns are loaded in increasing slot numbers.

The resulting attribute values are undefined if the base type of the shader attribute at slot index is not floating-point (e.g. is signed or unsigned integer). To load values into a generic shader attribute declared as a signed or unsigned scalar or vector, use the commands

void VertexAttribI{1234}{i ui}( uint index, T values );
void VertexAttribI{1234}{i ui}v( uint index, T values );
void VertexAttribI4{bs ubus}v( uint index, T values );

These commands specify values that are extended to full signed or unsigned integers, then loaded into the generic attribute at slot index in the same fashion as described above.

The resulting attribute values are undefined if the base type of the shader attribute at slot index is floating-point; if the base type is integer and unsigned integer values are supplied (the VertexAttribI*ui, VertexAttribI*us, and VertexAttribI*ub commands); or if the base type is unsigned integer and signed integer values are supplied (the VertexAttribI*i, VertexAttribI*s, and VertexAttribI*b commands).

The error INVALID_VALUE is generated by VertexAttrib* if index is greater than or equal to MAX_VERTEX_ATTRIBS.

Setting generic vertex attribute zero specifies a vertex; the four vertex coordinates are taken from the values of attribute zero. A Vertex2, Vertex3, or Vertex4 command is completely equivalent to the corresponding VertexAttrib* command with an index of zero. Setting any other generic vertex attribute updates the current values of the attribute. There are no current values for vertex attribute zero.

There is no aliasing among generic attributes and conventional attributes. In other words, an application can set all MAX_VERTEX_ATTRIBS generic attributes and all conventional attributes without fear of one particular attribute overwriting the value of another attribute.

The state required to support vertex specification consists of four floating-point numbers per texture coordinate set to store the current texture coordinates s, t, r, and q, three floating-point numbers to store the three coordinates of the current normal, one floating-point number to store the current fog coordinate, four floating-point values to store the current RGBA color, four floating-point values to store the current RGBA secondary color, one floating-point value to store the current color index, and MAX_VERTEX_ATTRIBS - 1 four-component floating-point vectors to store generic vertex attributes.

There is no notion of a current vertex, so no state is devoted to vertex coordinates or generic attribute zero. The initial texture coordinates are (s, t, r, q) = (0, 0, 0, 1) for each texture coordinate set. The initial current normal has coordinates (0, 0, 1). The initial fog coordinate is zero. The initial RGBA color is (R, G, B, A) = (1, 1, 1, 1) and the initial RGBA secondary color is (0, 0, 0, 1). The initial color index is 1. The initial values for all generic vertex attributes are (0, 0, 0, 1).

2.8 Vertex Arrays

The vertex specification commands described in section 2.7 accept data in almost any format, but their use requires many command executions to specify even simple geometry. Vertex data may also be placed into arrays that are stored in the client’s address space. Blocks of data in these arrays may then be used to specify multiple geometric primitives through the execution of a single GL command. The client may specify up to seven plus the values of MAX_TEXTURE_COORDS and MAX_VERTEX_ATTRIBS arrays: one each to store vertex coordinates, normals, colors, secondary colors, color indices, edge flags, fog coordinates, two or more texture coordinate sets, and one or more generic vertex attributes. The commands

void VertexPointer( int size, enum type, sizei stride, void *pointer );
void NormalPointer( enum type, sizei stride, void *pointer );
void ColorPointer( int size, enum type, sizei stride, void *pointer );
void SecondaryColorPointer( int size, enum type, sizei stride, void *pointer );
void IndexPointer( enum type, sizei stride, void *pointer );
void EdgeFlagPointer( sizei stride, void *pointer );
void FogCoordPointer( enum type, sizei stride, void *pointer );
void TexCoordPointer( int size, enum type, sizei stride, void *pointer );
void VertexAttribPointer( uint index, int size, enum type, boolean normalized, sizei stride, const void *pointer );
void VertexAttribIPointer( uint index, int size, enum type, sizei stride, const void *pointer );

describe the locations and organizations of these arrays. For each command, type specifies the data type of the values stored in the array. Because edge flags are always type boolean, EdgeFlagPointer has no type argument. size, when present, indicates the number of values per vertex that are stored in the array. Because normals are always specified with three values, NormalPointer has no size argument. Likewise, because color indices and edge flags are always specified with a single value, IndexPointer and EdgeFlagPointer also have no size argument. Table 2.4 indicates the allowable values for size and type (when present). For type the values BYTE, SHORT, INT, FLOAT, HALF_FLOAT, and DOUBLE indicate types byte, short, int, float, half, and double, respectively; and the values UNSIGNED_BYTE, UNSIGNED_SHORT, and UNSIGNED_INT indicate types ubyte, ushort, and uint, respectively. The error INVALID_VALUE is generated if size is specified with a value other than that indicated in the table.

The index parameter in the VertexAttribPointer and VertexAttribIPointer commands identify the generic vertex attribute array being described. The error INVALID_VALUE is generated if index is greater than or equal to MAX_VERTEX_ATTRIBS. Generic attribute arrays with integer type arguments can be handled in one of three ways: converted to float by normalizing to [0, 1] or [-1, 1] as specified in table 2.10, converted directly to float, or left as integers. Data for an array specified by VertexAttribPointer will be converted to floating-point by normalizing if normalized is TRUE, and converted directly to floating-point otherwise. Data for an array specified by VertexAttribIPointer will always be left as integer values; such data are referred to as pure integers.


Command | Sizes | Integer Handling | Types
VertexPointer | 2,3,4 | cast | short, int, float, half, double
NormalPointer | 3 | normalize | byte, short, int, float, half, double
ColorPointer | 3,4 | normalize | byte, ubyte, short, ushort, int, uint, float, half, double
SecondaryColorPointer | 3 | normalize | byte, ubyte, short, ushort, int, uint, float, half, double
IndexPointer | 1 | cast | ubyte, short, int, float, double
FogCoordPointer | 1 | n/a | float, half, double
TexCoordPointer | 1,2,3,4 | cast | short, int, float, half, double
EdgeFlagPointer | 1 | integer | boolean
VertexAttribPointer | 1,2,3,4 | flag | byte, ubyte, short, ushort, int, uint, float, half, double
VertexAttribIPointer | 1,2,3,4 | integer | byte, ubyte, short, ushort, int, uint

Table 2.4: Vertex array sizes (values per vertex) and data types. The “Integer Handling” column indicates how fixed-point data types are handled: “cast” means that they are converted to floating-point directly, “normalize” means that they are converted to floating-point by normalizing to [0, 1] (for unsigned types) or [-1, 1] (for signed types), “integer” means that they remain as integer values, and “flag” means that either “cast” or “normalize” applies, depending on the setting of the normalized flag in VertexAttribPointer.


The one, two, three, or four values in an array that correspond to a single vertex comprise an array element. The values within each array element are stored sequentially in memory. If stride is specified as zero, then array elements are stored sequentially as well. The error INVALID_VALUE is generated if stride is negative. Otherwise pointers to the ith and (i+1)st elements of an array differ by stride basic machine units (typically unsigned bytes), the pointer to the (i+1)st element being greater. For each command, pointer specifies the location in memory of the first value of the first element of the array being specified.

An individual array is enabled or disabled by calling one of

void EnableClientState( enum array );
void DisableClientState( enum array );

with array set to VERTEX_ARRAY, NORMAL_ARRAY, COLOR_ARRAY, SECONDARY_COLOR_ARRAY, INDEX_ARRAY, EDGE_FLAG_ARRAY, FOG_COORD_ARRAY, or TEXTURE_COORD_ARRAY, for the vertex, normal, color, secondary color, color index, edge flag, fog coordinate, or texture coordinate array, respectively.

An individual generic vertex attribute array is enabled or disabled by calling one of

void EnableVertexAttribArray( uint index );
void DisableVertexAttribArray( uint index );

where index identifies the generic vertex attribute array to enable or disable. The error INVALID_VALUE is generated if index is greater than or equal to MAX_VERTEX_ATTRIBS.

The command

void ClientActiveTexture( enum texture );

is used to select the vertex array client state parameters to be modified by the TexCoordPointer command and the array affected by EnableClientState and DisableClientState with parameter TEXTURE_COORD_ARRAY. This command sets the client state variable CLIENT_ACTIVE_TEXTURE. Each texture coordinate set has a client state vector which is selected when this command is invoked. This state vector includes the vertex array state. This call also selects the texture coordinate set state used for queries of client state.

Specifying an invalid texture generates the error INVALID_ENUM. Valid values of texture are the same as for the MultiTexCoord commands described in section 2.7.

The command


void ArrayElement( int i );

transfers the ith element of every enabled array to the GL. The effect of ArrayElement(i) is the same as the effect of the command sequence

if (normal array enabled)
    Normal3[type]v(normal array element i);
if (color array enabled)
    Color[size][type]v(color array element i);
if (secondary color array enabled)
    SecondaryColor3[type]v(secondary color array element i);
if (fog coordinate array enabled)
    FogCoord[type]v(fog coordinate array element i);
for (j = 0; j < textureUnits; j++) {
    if (texture coordinate set j array enabled)
        MultiTexCoord[size][type]v(TEXTURE0 + j, texture coordinate set j array element i);
}
if (color index array enabled)
    Index[type]v(color index array element i);
if (edge flag array enabled)
    EdgeFlagv(edge flag array element i);
for (j = 1; j < genericAttributes; j++) {
    if (generic vertex attribute j array enabled) {
        if (generic vertex attribute j array is a pure integer array)
            VertexAttribI[size][type]v(j, generic vertex attribute j array element i);
        else if (generic vertex attribute j array normalization flag is set, and type is not FLOAT or DOUBLE)
            VertexAttrib[size]N[type]v(j, generic vertex attribute j array element i);
        else
            VertexAttrib[size][type]v(j, generic vertex attribute j array element i);
    }
}
if (generic vertex attribute array 0 enabled) {
    if (generic vertex attribute 0 array is a pure integer array)
        VertexAttribI[size][type]v(0, generic vertex attribute 0 array element i);
    else if (generic vertex attribute 0 array normalization flag is set, and type is not FLOAT or DOUBLE)
        VertexAttrib[size]N[type]v(0, generic vertex attribute 0 array element i);
    else
        VertexAttrib[size][type]v(0, generic vertex attribute 0 array element i);
} else if (vertex array enabled) {
    Vertex[size][type]v(vertex array element i);
}

where textureUnits and genericAttributes give the number of texture coordinate sets and generic vertex attributes supported by the implementation, respectively. “[size]” and “[type]” correspond to the size and type of the corresponding array. For generic vertex attributes, it is assumed that a complete set of vertex attribute commands exists, even though not all such functions are provided by the GL.

Changes made to array data between the execution of Begin and the corresponding execution of End may affect calls to ArrayElement that are made within the same Begin/End period in non-sequential ways. That is, a call to ArrayElement that precedes a change to array data may access the changed data, and a call that follows a change to array data may access original data.

Specifying i < 0 results in undefined behavior. Generating the error INVALID_VALUE is recommended in this case.

The command

void DrawArrays( enum mode, int first, sizei count );

constructs a sequence of geometric primitives using elements first through first + count - 1 of each enabled array. mode specifies what kind of primitives are constructed; it accepts the same token values as the mode parameter of the Begin command. The effect of

DrawArrays(mode, first, count);

is the same as the effect of the command sequence

if (mode or count is invalid)
    generate appropriate error
else {
    Begin(mode);
    for (int i = 0; i < count; i++)
        ArrayElement(first + i);
    End();
}

with one exception: the current normal coordinates, color, secondary color, color index, edge flag, fog coordinate, texture coordinates, and generic attributes are each indeterminate after execution of DrawArrays, if the corresponding array is enabled. Current values corresponding to disabled arrays are not modified by the execution of DrawArrays.



Specifying first < 0 results in undefined behavior. Generating the error INVALID_VALUE is recommended in this case.

The command

void MultiDrawArrays( enum mode, int *first, sizei *count, sizei primcount );

behaves identically to DrawArrays except that primcount separate ranges of elements are specified instead. It has the same effect as:

for (i = 0; i < primcount; i++) {
    if (count[i] > 0)
        DrawArrays(mode, first[i], count[i]);
}

The command

void DrawElements( enum mode, sizei count, enum type, void *indices );

constructs a sequence of geometric primitives using the count elements whose indices are stored in indices. type must be one of UNSIGNED_BYTE, UNSIGNED_SHORT, or UNSIGNED_INT, indicating that the values in indices are indices of GL type ubyte, ushort, or uint respectively. mode specifies what kind of primitives are constructed; it accepts the same token values as the mode parameter of the Begin command. The effect of

DrawElements(mode, count, type, indices);

is the same as the effect of the command sequence

if (mode, count, or type is invalid)
    generate appropriate error
else {
    Begin(mode);
    for (int i = 0; i < count; i++)
        ArrayElement(indices[i]);
    End();
}



with one exception: the current normal coordinates, color, secondary color, color index, edge flag, fog coordinate, texture coordinates, and generic attributes are each indeterminate after the execution of DrawElements, if the corresponding array is enabled. Current values corresponding to disabled arrays are not modified by the execution of DrawElements.

The command

void MultiDrawElements( enum mode, sizei *count, enum type, void **indices, sizei primcount );

behaves identically to DrawElements except that primcount separate lists of elements are specified instead. It has the same effect as:

for (i = 0; i < primcount; i++) {
    if (count[i] > 0)
        DrawElements(mode, count[i], type, indices[i]);
}

The command

void DrawRangeElements( enum mode, uint start, uint end, sizei count, enum type, void *indices );

is a restricted form of DrawElements. mode, count, type, and indices match the corresponding arguments to DrawElements, with the additional constraint that all values in the array indices must lie between start and end inclusive.

Implementations denote recommended maximum amounts of vertex and index data, which may be queried by calling GetIntegerv with the symbolic constants MAX_ELEMENTS_VERTICES and MAX_ELEMENTS_INDICES. If end - start + 1 is greater than the value of MAX_ELEMENTS_VERTICES, or if count is greater than the value of MAX_ELEMENTS_INDICES, then the call may operate at reduced performance. There is no requirement that all vertices in the range [start, end] be referenced. However, the implementation may partially process unused vertices, reducing performance from what could be achieved with an optimal index set.

The error INVALID_VALUE is generated if end < start. Invalid mode, count, or type parameters generate the same errors as would the corresponding call to DrawElements. It is an error for indices to lie outside the range [start, end], but implementations may not check for this. Such indices will cause implementation-dependent behavior.

The command

void InterleavedArrays( enum format, sizei stride, void *pointer );

efficiently initializes the six arrays and their enables to one of 14 configurations. format must be one of 14 symbolic constants: V2F, V3F, C4UB_V2F, C4UB_V3F, C3F_V3F, N3F_V3F, C4F_N3F_V3F, T2F_V3F, T4F_V4F, T2F_C4UB_V3F, T2F_C3F_V3F, T2F_N3F_V3F, T2F_C4F_N3F_V3F, or T4F_C4F_N3F_V4F.

The effect of

InterleavedArrays(format, stride, pointer);

is the same as the effect of the command sequence

if (format or stride is invalid)
    generate appropriate error
else {
    int str;
    set et, ec, en, st, sc, sv, tc, pc, pn, pv, and s as a function
        of table 2.5 and the value of format.
    str = stride;
    if (str is zero)
        str = s;
    DisableClientState(EDGE_FLAG_ARRAY);
    DisableClientState(INDEX_ARRAY);
    DisableClientState(SECONDARY_COLOR_ARRAY);
    DisableClientState(FOG_COORD_ARRAY);
    if (et) {
        EnableClientState(TEXTURE_COORD_ARRAY);
        TexCoordPointer(st, FLOAT, str, pointer);
    } else
        DisableClientState(TEXTURE_COORD_ARRAY);
    if (ec) {
        EnableClientState(COLOR_ARRAY);
        ColorPointer(sc, tc, str, pointer + pc);
    } else
        DisableClientState(COLOR_ARRAY);
    if (en) {
        EnableClientState(NORMAL_ARRAY);
        NormalPointer(FLOAT, str, pointer + pn);
    } else


format           et     ec     en     st  sc  sv  tc
V2F              False  False  False  -   -   2   -
V3F              False  False  False  -   -   3   -
C4UB_V2F         False  True   False  -   4   2   UNSIGNED_BYTE
C4UB_V3F         False  True   False  -   4   3   UNSIGNED_BYTE
C3F_V3F          False  True   False  -   3   3   FLOAT
N3F_V3F          False  False  True   -   -   3   -
C4F_N3F_V3F      False  True   True   -   4   3   FLOAT
T2F_V3F          True   False  False  2   -   3   -
T4F_V4F          True   False  False  4   -   4   -
T2F_C4UB_V3F     True   True   False  2   4   3   UNSIGNED_BYTE
T2F_C3F_V3F      True   True   False  2   3   3   FLOAT
T2F_N3F_V3F      True   False  True   2   -   3   -
T2F_C4F_N3F_V3F  True   True   True   2   4   3   FLOAT
T4F_C4F_N3F_V4F  True   True   True   4   4   4   FLOAT

format           pc  pn  pv      s
V2F              -   -   0       2f
V3F              -   -   0       3f
C4UB_V2F         0   -   c       c + 2f
C4UB_V3F         0   -   c       c + 3f
C3F_V3F          0   -   3f      6f
N3F_V3F          -   0   3f      6f
C4F_N3F_V3F      0   4f  7f      10f
T2F_V3F          -   -   2f      5f
T4F_V4F          -   -   4f      8f
T2F_C4UB_V3F     2f  -   c + 2f  c + 5f
T2F_C3F_V3F      2f  -   5f      8f
T2F_N3F_V3F      -   2f  5f      8f
T2F_C4F_N3F_V3F  2f  6f  9f      12f
T4F_C4F_N3F_V4F  4f  8f  11f     15f

Table 2.5: Variables that direct the execution of InterleavedArrays. f is sizeof(FLOAT). c is 4 times sizeof(UNSIGNED_BYTE), rounded up to the nearest multiple of f. All pointer arithmetic is performed in units of sizeof(UNSIGNED_BYTE).


        DisableClientState(NORMAL_ARRAY);
    EnableClientState(VERTEX_ARRAY);
    VertexPointer(sv, FLOAT, str, pointer + pv);
}

If the number of supported texture units (the value of MAX_TEXTURE_COORDS) is m and the number of supported generic vertex attributes (the value of MAX_VERTEX_ATTRIBS) is n, then the client state required to implement vertex arrays consists of an integer for the client active texture unit selector, 7 + m + n boolean values, 7 + m + n memory pointers, 7 + m + n integer stride values, 7 + m + n symbolic constants representing array types, 3 + m + n integers representing values per element, n boolean values indicating normalization, and n boolean values indicating whether the attribute values are pure integers.

In the initial state, the client active texture unit selector is TEXTURE0, the boolean values are each false, the memory pointers are each NULL, the strides are each zero, the array types are each FLOAT, the integers representing values per element are each four, and the normalized and pure integer flags are each false.

2.9 Buffer Objects

The vertex data arrays described in section 2.8 are stored in client memory. It is sometimes desirable to store frequently used client data, such as vertex array and pixel data, in high-performance server memory. GL buffer objects provide a mechanism that clients can use to allocate, initialize, and render from such memory.

The name space for buffer objects is the unsigned integers, with zero reserved for the GL. A buffer object is created by binding an unused name to a buffer target. The binding is effected by calling

void BindBuffer( enum target, uint buffer );

target must be one of ARRAY_BUFFER, ELEMENT_ARRAY_BUFFER, PIXEL_UNPACK_BUFFER, or PIXEL_PACK_BUFFER. The ARRAY_BUFFER target is discussed in section 2.9.1. The ELEMENT_ARRAY_BUFFER target is discussed in section 2.9.2. The PIXEL_UNPACK_BUFFER and PIXEL_PACK_BUFFER targets are discussed later in sections 3.7, 4.3.2, and 6.1. If the buffer object named buffer has not been previously bound or has been deleted since the last binding, the GL creates a new state vector, initialized with a zero-sized memory buffer and comprising the state values listed in table 2.6.

BindBuffer may also be used to bind an existing buffer object. If the bind is successful no change is made to the state of the newly bound buffer object, and any previous binding to target is broken.


Name                Type     Initial Value  Legal Values
BUFFER_SIZE         integer  0              any non-negative integer
BUFFER_USAGE        enum     STATIC_DRAW    STREAM_DRAW, STREAM_READ, STREAM_COPY, STATIC_DRAW, STATIC_READ, STATIC_COPY, DYNAMIC_DRAW, DYNAMIC_READ, DYNAMIC_COPY
BUFFER_ACCESS       enum     READ_WRITE     READ_ONLY, WRITE_ONLY, READ_WRITE
BUFFER_MAPPED       boolean  FALSE          TRUE, FALSE
BUFFER_MAP_POINTER  void*    NULL           address

Table 2.6: Buffer object parameters and their values.

While a buffer object is bound, GL operations on the target to which it is bound affect the bound buffer object, and queries of the target to which a buffer object is bound return state from the bound object.

Initially, each buffer object target is bound to zero. There is no buffer object corresponding to the name zero, so client attempts to modify or query buffer object state for a target bound to zero generate an INVALID_OPERATION error.

Buffer objects are deleted by calling

void DeleteBuffers( sizei n, const uint *buffers );

buffers contains n names of buffer objects to be deleted. After a buffer object is deleted it has no contents, and its name is again unused. Unused names in buffers are silently ignored, as is the value zero.

The command

void GenBuffers( sizei n, uint *buffers );

returns n previously unused buffer object names in buffers. These names are marked as used, for the purposes of GenBuffers only, but they acquire buffer state only when they are first bound, just as if they were unused.

While a buffer object is bound, any GL operations on that object affect any other bindings of that object. If a buffer object is deleted while it is bound, all bindings to that object in the current context (i.e. in the thread that called DeleteBuffers) are reset to zero. Bindings to that buffer in other contexts and other threads are not affected, but attempting to use a deleted buffer in another thread


produces undefined results, including but not limited to possible GL errors and rendering corruption. Using a deleted buffer in another context or thread may not, however, result in program termination.
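The name, binding, and deletion behavior described above can be modeled with a few lines of plain C. This is a toy model added for illustration, not real GL: it tracks a single ARRAY_BUFFER-style binding point, creates an object on first bind, and resets the current context's binding to zero on deletion, while ignoring zero and unused names.

```c
#include <assert.h>

enum { MAX_NAMES = 16 };

static int object_exists[MAX_NAMES];   /* indexed by buffer name */
static unsigned array_buffer_binding;  /* current binding, 0 = none */

static void toy_bind_buffer(unsigned buffer)
{
    if (buffer != 0 && !object_exists[buffer])
        object_exists[buffer] = 1;     /* first bind creates the object */
    array_buffer_binding = buffer;
}

static void toy_delete_buffer(unsigned buffer)
{
    if (buffer == 0 || !object_exists[buffer])
        return;                        /* zero and unused names are ignored */
    object_exists[buffer] = 0;
    if (array_buffer_binding == buffer)
        array_buffer_binding = 0;      /* binding in this context resets */
}
```

The multi-context behavior (bindings in other contexts left dangling) is deliberately outside this sketch.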

The data store of a buffer object is created and initialized by calling

void BufferData( enum target, sizeiptr size, const void *data, enum usage );

with target set to one of ARRAY_BUFFER, ELEMENT_ARRAY_BUFFER, PIXEL_UNPACK_BUFFER, or PIXEL_PACK_BUFFER, size set to the size of the data store in basic machine units, and data pointing to the source data in client memory. If data is non-null, then the source data is copied to the buffer object's data store. If data is null, then the contents of the buffer object's data store are undefined.

usage is specified as one of nine enumerated values, indicating the expected application usage pattern of the data store. The values are:

STREAM_DRAW The data store contents will be specified once by the application, and used at most a few times as the source for GL drawing and image specification commands.

STREAM_READ The data store contents will be specified once by reading data from the GL, and queried at most a few times by the application.

STREAM_COPY The data store contents will be specified once by reading data from the GL, and used at most a few times as the source for GL drawing and image specification commands.

STATIC_DRAW The data store contents will be specified once by the application, and used many times as the source for GL drawing and image specification commands.

STATIC_READ The data store contents will be specified once by reading data from the GL, and queried many times by the application.

STATIC_COPY The data store contents will be specified once by reading data from the GL, and used many times as the source for GL drawing and image specification commands.

DYNAMIC_DRAW The data store contents will be respecified repeatedly by the application, and used many times as the source for GL drawing and image specification commands.


Name                Value
BUFFER_SIZE         size
BUFFER_USAGE        usage
BUFFER_ACCESS       READ_WRITE
BUFFER_MAPPED       FALSE
BUFFER_MAP_POINTER  NULL

Table 2.7: Buffer object initial state.

DYNAMIC_READ The data store contents will be respecified repeatedly by reading data from the GL, and queried many times by the application.

DYNAMIC_COPY The data store contents will be respecified repeatedly by reading data from the GL, and used many times as the source for GL drawing and image specification commands.

usage is provided as a performance hint only. The specified usage value does not constrain the actual usage pattern of the data store.

BufferData deletes any existing data store, and sets the values of the buffer object’s state variables as shown in table 2.7.

Clients must align data elements consistent with the requirements of the client platform, with an additional base-level requirement that an offset within a buffer to a datum comprising N basic machine units be a multiple of N.
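The base-level alignment rule above amounts to a simple divisibility check, sketched here as an illustrative helper (not part of the specification):

```c
#include <assert.h>

/* An offset within a buffer to a datum of datum_size basic machine
 * units must be a multiple of datum_size. */
static int offset_is_aligned(long offset, long datum_size)
{
    return datum_size > 0 && offset % datum_size == 0;
}
```

For example, a 4-byte float may legally start at offset 8 but not at offset 6.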

If the GL is unable to create a data store of the requested size, the error OUT_OF_MEMORY is generated.

To modify some or all of the data contained in a buffer object’s data store, the client may use the command

void BufferSubData( enum target, intptr offset, sizeiptr size, const void *data );

with target set to ARRAY_BUFFER. offset and size indicate the range of data in the buffer object that is to be replaced, in terms of basic machine units. data specifies a region of client memory size basic machine units in length, containing the data that replace the specified buffer range. An INVALID_VALUE error is generated if offset or size is less than zero, or if offset + size is greater than the value of BUFFER_SIZE.

The entire data store of a buffer object can be mapped into the client’s address space by calling

void *MapBuffer( enum target, enum access );


Name                Value
BUFFER_ACCESS       access
BUFFER_MAPPED       TRUE
BUFFER_MAP_POINTER  pointer to the data store

Table 2.8: Buffer object state set by MapBuffer.

with target set to one of ARRAY_BUFFER, ELEMENT_ARRAY_BUFFER, PIXEL_UNPACK_BUFFER, or PIXEL_PACK_BUFFER. If the GL is able to map the buffer object's data store into the client's address space, MapBuffer returns the pointer value to the data store once all pending operations on that buffer have completed. If the buffer data store is already in the mapped state, MapBuffer returns NULL, and an INVALID_OPERATION error is generated. Otherwise MapBuffer returns NULL, and the error OUT_OF_MEMORY is generated. access is specified as one of READ_ONLY, WRITE_ONLY, or READ_WRITE, indicating the operations that the client may perform on the data store through the pointer while the data store is mapped.

MapBuffer sets buffer object state values as shown in table 2.8.

Non-NULL pointers returned by MapBuffer may be used by the client to modify and query buffer object data, consistent with the access rules of the mapping, while the mapping remains valid. No GL error is generated if the pointer is used to attempt to modify a READ_ONLY data store, or to attempt to read from a WRITE_ONLY data store, but operation may be slow and system errors (possibly including program termination) may result. Pointer values returned by MapBuffer may not be passed as parameter values to GL commands. For example, they may not be used to specify array pointers, or to specify or query pixel or texture image data; such actions produce undefined results, although implementations may not check for such behavior for performance reasons.

Calling BufferSubData to modify the data store of a mapped buffer will generate an INVALID_OPERATION error.

Mappings to the data stores of buffer objects may have nonstandard performance characteristics. For example, such mappings may be marked as uncacheable regions of memory, and in such cases reading from them may be very slow. To ensure optimal performance, the client should use the mapping in a fashion consistent with the values of BUFFER_USAGE and BUFFER_ACCESS. Using a mapping in a fashion inconsistent with these values is liable to be multiple orders of magnitude slower than using normal memory.

After the client has specified the contents of a mapped data store, and before


the data in that store are dereferenced by any GL commands, the mapping must be relinquished by calling

boolean UnmapBuffer( enum target );

with target set to one of ARRAY_BUFFER, ELEMENT_ARRAY_BUFFER, PIXEL_UNPACK_BUFFER, or PIXEL_PACK_BUFFER. Unmapping a mapped buffer object invalidates the pointers to its data store and sets the object's BUFFER_MAPPED state to FALSE and its BUFFER_MAP_POINTER state to NULL.

UnmapBuffer returns TRUE unless data values in the buffer's data store have become corrupted during the period that the buffer was mapped. Such corruption can be the result of a screen resolution change or other window system-dependent event that causes system heaps such as those for high-performance graphics memory to be discarded. GL implementations must guarantee that such corruption can occur only during the periods that a buffer's data store is mapped. If such corruption has occurred, UnmapBuffer returns FALSE, and the contents of the buffer's data store become undefined.

If the buffer data store is already in the unmapped state, UnmapBuffer returns FALSE, and an INVALID_OPERATION error is generated. However, unmapping that occurs as a side effect of buffer deletion or reinitialization is not an error.

All or part of the data store of a buffer object may be mapped into the client’s address space by calling

void *MapBufferRange( enum target, intptr offset, sizeiptr length, bitfield access );

with target set to one of ARRAY_BUFFER, ELEMENT_ARRAY_BUFFER, PIXEL_UNPACK_BUFFER, or PIXEL_PACK_BUFFER. offset and length indicate the range of data in the buffer object that is to be mapped, in terms of basic machine units. access is a bitfield containing flags which describe the requested mapping. These flags are described below.

If no error occurs, a pointer to the beginning of the mapped range is returned and may be used to modify and/or query the corresponding range of the buffer, according to the following flag bits set in access:

  • MAP_READ_BIT indicates that the returned pointer may be used to read buffer object data. No GL error is generated if the pointer is used to query a mapping which excludes this flag, but the result is undefined and system errors (possibly including program termination) may occur.

  • MAP_WRITE_BIT indicates that the returned pointer may be used to modify buffer object data. No GL error is generated if the pointer is used to modify a mapping which excludes this flag, but the result is undefined and system errors (possibly including program termination) may occur.

The following optional flag bits in access may be used to modify the mapping:

  • MAP_INVALIDATE_RANGE_BIT indicates that the previous contents of the specified range may be discarded. Data within this range are undefined with the exception of subsequently written data. No GL error is generated if subsequent GL operations access unwritten data, but the result is undefined and system errors (possibly including program termination) may occur. This flag may not be used in combination with MAP_READ_BIT.

  • MAP_INVALIDATE_BUFFER_BIT indicates that the previous contents of the entire buffer may be discarded. Data within the entire buffer are undefined with the exception of subsequently written data. No GL error is generated if subsequent GL operations access unwritten data, but the result is undefined and system errors (possibly including program termination) may occur. This flag may not be used in combination with MAP_READ_BIT.

  • MAP_FLUSH_EXPLICIT_BIT indicates that one or more discrete subranges of the mapping may be modified. When this flag is set, modifications to each subrange must be explicitly flushed by calling FlushMappedBufferRange. No GL error is set if a subrange of the mapping is modified and not flushed, but data within the corresponding subrange of the buffer are undefined. This flag may only be used in conjunction with MAP_WRITE_BIT. When this option is selected, flushing is strictly limited to regions that are explicitly indicated with calls to FlushMappedBufferRange prior to unmap; if this option is not selected UnmapBuffer will automatically flush the entire mapped range when called.

  • MAP_UNSYNCHRONIZED_BIT indicates that the GL should not attempt to synchronize pending operations on the buffer prior to returning from MapBufferRange. No GL error is generated if pending operations which source or modify the buffer overlap the mapped region, but the result of such previous and any subsequent operations is undefined.

Errors

If an error occurs, MapBufferRange returns a NULL pointer.


An INVALID_VALUE error is generated if offset or length is negative, if offset + length is greater than the value of BUFFER_SIZE, or if access has any bits set other than those defined above.

An INVALID_OPERATION error is generated for any of the following conditions:

  • The buffer is already in a mapped state.

  • Neither MAP_READ_BIT nor MAP_WRITE_BIT is set.

  • MAP_READ_BIT is set and any of MAP_INVALIDATE_RANGE_BIT, MAP_INVALIDATE_BUFFER_BIT, or MAP_UNSYNCHRONIZED_BIT is set.

  • MAP_FLUSH_EXPLICIT_BIT is set and MAP_WRITE_BIT is not set.

No error is generated if memory outside the mapped range is modified or queried, but the result is undefined and system errors (possibly including program termination) may occur.
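The error conditions above can be collected into a single validation routine. The sketch below is an editor-added illustration of that logic; the bit constants happen to match common GL header values, but they are defined locally here and the function is not part of any GL implementation.

```c
#include <assert.h>

#define MAP_READ_BIT              0x0001
#define MAP_WRITE_BIT             0x0002
#define MAP_INVALIDATE_RANGE_BIT  0x0004
#define MAP_INVALIDATE_BUFFER_BIT 0x0008
#define MAP_FLUSH_EXPLICIT_BIT    0x0010
#define MAP_UNSYNCHRONIZED_BIT    0x0020

enum { NO_ERROR = 0, INVALID_VALUE, INVALID_OPERATION };

/* Apply the MapBufferRange error checks listed in the text. */
static int check_map_buffer_range(long offset, long length,
                                  unsigned access,
                                  long buffer_size, int already_mapped)
{
    unsigned all_bits = MAP_READ_BIT | MAP_WRITE_BIT |
                        MAP_INVALIDATE_RANGE_BIT |
                        MAP_INVALIDATE_BUFFER_BIT |
                        MAP_FLUSH_EXPLICIT_BIT | MAP_UNSYNCHRONIZED_BIT;
    /* INVALID_VALUE: bad range or unknown access bits */
    if (offset < 0 || length < 0 || offset + length > buffer_size ||
        (access & ~all_bits))
        return INVALID_VALUE;
    /* INVALID_OPERATION: the four listed conditions */
    if (already_mapped)
        return INVALID_OPERATION;
    if (!(access & (MAP_READ_BIT | MAP_WRITE_BIT)))
        return INVALID_OPERATION;
    if ((access & MAP_READ_BIT) &&
        (access & (MAP_INVALIDATE_RANGE_BIT | MAP_INVALIDATE_BUFFER_BIT |
                   MAP_UNSYNCHRONIZED_BIT)))
        return INVALID_OPERATION;
    if ((access & MAP_FLUSH_EXPLICIT_BIT) && !(access & MAP_WRITE_BIT))
        return INVALID_OPERATION;
    return NO_ERROR;
}
```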

If a buffer is mapped with the MAP_FLUSH_EXPLICIT_BIT flag, modifications to the mapped range may be indicated by calling

void FlushMappedBufferRange( enum target, intptr offset, sizeiptr length );

with target set to one of ARRAY_BUFFER, ELEMENT_ARRAY_BUFFER, PIXEL_UNPACK_BUFFER, or PIXEL_PACK_BUFFER. offset and length indicate a modified subrange of the mapping, in basic machine units. The specified subrange to flush is relative to the start of the currently mapped range of buffer. FlushMappedBufferRange may be called multiple times to indicate distinct subranges of the mapping which require flushing.

Errors

An INVALID_VALUE error is generated if offset or length is negative, or if offset + length exceeds the size of the mapping.

An INVALID_OPERATION error is generated if zero is bound to target.

An INVALID_OPERATION error is generated if buffer is not mapped, or is mapped without the MAP_FLUSH_EXPLICIT_BIT flag.

2.9.1 Vertex Arrays in Buffer Objects

Blocks of vertex array data may be stored in buffer objects with the same format and layout options supported for client-side vertex arrays. However, it is expected


that GL implementations will (at minimum) be optimized for data with all components represented as floats, as well as for color data with components represented as either floats or unsigned bytes.

A buffer object binding point is added to the client state associated with each vertex array type. The commands that specify the locations and organizations of vertex arrays copy the buffer object name that is bound to ARRAY_BUFFER to the binding point corresponding to the vertex array of the type being specified. For example, the NormalPointer command copies the value of ARRAY_BUFFER_BINDING (the queriable name of the buffer binding corresponding to the target ARRAY_BUFFER) to the client state variable NORMAL_ARRAY_BUFFER_BINDING.

Rendering commands ArrayElement, DrawArrays, DrawElements, DrawRangeElements, MultiDrawArrays, and MultiDrawElements operate as previously defined, except that data for enabled vertex and attrib arrays are sourced from buffers if the array’s buffer binding is non-zero. When an array is sourced from a buffer object, the pointer value of that array is used to compute an offset, in basic machine units, into the data store of the buffer object. This offset is computed by subtracting a null pointer from the pointer value, where both pointers are treated as pointers to basic machine units.
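The pointer-to-offset convention just described is commonly wrapped in a small helper macro in client code. The sketch below is an illustration of the arithmetic the text describes, not part of the specification; note that pointer arithmetic on a null pointer is technically undefined behavior in ISO C, though this idiom is in wide use.

```c
#include <assert.h>
#include <stddef.h>

/* Encode an integer offset as a pointer value, as arrays sourced from
 * buffer objects expect. */
#define BUFFER_OFFSET(i) ((char *)NULL + (i))

/* Recover the offset as the GL conceptually does: subtract a null
 * pointer from the pointer value, in basic machine units. */
static ptrdiff_t offset_of(const void *pointer)
{
    return (const char *)pointer - (const char *)NULL;
}
```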

It is acceptable for vertex or attrib arrays to be sourced from any combination of client memory and various buffer objects during a single rendering operation.

Any GL command that attempts to read data from a buffer object will fail and generate an INVALID_OPERATION error if the object is mapped at the time the command is issued.

2.9.2 Array Indices in Buffer Objects

Blocks of array indices may be stored in buffer objects with the same format options that are supported for client-side index arrays. Initially zero is bound to ELEMENT_ARRAY_BUFFER, indicating that DrawElements and DrawRangeElements are to source their indices from arrays passed as their indices parameters, and that MultiDrawElements is to source its indices from the array of pointers to arrays passed in as its indices parameter.

A buffer object is bound to ELEMENT_ARRAY_BUFFER by calling BindBuffer with target set to ELEMENT_ARRAY_BUFFER, and buffer set to the name of the buffer object. If no corresponding buffer object exists, one is initialized as defined in section 2.9.

While a non-zero buffer object name is bound to ELEMENT_ARRAY_BUFFER, DrawElements and DrawRangeElements source their indices from that buffer object, using their indices parameters as offsets into the buffer object in the same


fashion as described in section 2.9.1. MultiDrawElements also sources its indices from that buffer object, using its indices parameter as a pointer to an array of pointers that represent offsets into the buffer object.

Buffer objects created by binding an unused name to ARRAY_BUFFER and to ELEMENT_ARRAY_BUFFER are formally equivalent, but the GL may make different choices about storage implementation based on the initial binding. In some cases performance will be optimized by storing indices and array data in separate buffer objects, and by creating those buffer objects with the corresponding binding points.

2.9.3 Buffer Object State

The state required to support buffer objects consists of binding names for the array buffer, element buffer, pixel unpack buffer, and pixel pack buffer. Additionally, each vertex array has an associated binding, so there is a buffer object binding for each of the vertex array, normal array, color array, index array, multiple texture coordinate arrays, edge flag array, secondary color array, fog coordinate array, and vertex attribute arrays. The initial values for all buffer object bindings are zero.

The state of each buffer object consists of a buffer size in basic machine units, a usage parameter, an access parameter, a mapped boolean, a pointer to the mapped buffer (NULL if unmapped), and the sized array of basic machine units for the buffer data.

2.10 Vertex Array Objects

The buffer objects that are to be used by the vertex stage of the GL are collected together to form a vertex array object. All state related to the definition of data used by the vertex processor is encapsulated in a vertex array object.

The command

void GenVertexArrays( sizei n, uint *arrays );

returns n previously unused vertex array object names in arrays. These names are marked as used, for the purposes of GenVertexArrays only, and are initialized with the state listed in tables 6.6 through 6.9.

Vertex array objects are deleted by calling

void DeleteVertexArrays( sizei n, const uint *arrays );

arrays contains n names of vertex array objects to be deleted. Once a vertex array object is deleted it has no contents and its name is again unused. If a vertex array


object that is currently bound is deleted, the binding for that object reverts to zero and the default vertex array becomes current. Unused names in arrays are silently ignored, as is the value zero.

A vertex array object is created by binding a name returned by GenVertexAr-rays with the command

void BindVertexArray( uint array );

array is the vertex array object name. The resulting vertex array object is a new state vector, comprising all the state values listed in tables 6.6 through 6.9.

BindVertexArray may also be used to bind an existing vertex array object. If the bind is successful no change is made to the state of the bound vertex array object, and any previous binding is broken.

The currently bound vertex array object is used for all commands which modify vertex array state, such as VertexAttribPointer and EnableVertexAttribArray; all commands which draw from vertex arrays, such as DrawArrays and DrawElements; and all queries of vertex array state (see chapter 6).

BindVertexArray fails and an INVALID_OPERATION error is generated if array is not a name returned from a previous call to GenVertexArrays, or if such a name has since been deleted with DeleteVertexArrays.

An INVALID_OPERATION error is generated if VertexAttribPointer or VertexAttribIPointer is called while a non-zero vertex array object is bound and zero is bound to the ARRAY_BUFFER buffer object binding point [2].

2.11 Rectangles

There is a set of GL commands to support efficient specification of rectangles as two corner vertices.

void Rect{sifd}( T x1, T y1, T x2, T y2 );

void Rect{sifd}v( T v1[2], T v2[2] );

Each command takes either four arguments organized as two consecutive pairs of (x, y) coordinates, or two pointers to arrays each of which contains an x value followed by a y value. The effect of the Rect command

Rect(x1, y1, x2, y2);

is exactly the same as the following sequence of commands:

[2] This error makes it impossible to create a vertex array object containing client array pointers.

Version 3.0 -August 11, 2008


Begin(POLYGON);

Vertex2(x1,y1);

Vertex2(x2,y1);

Vertex2(x2,y2);

Vertex2(x1,y2);

End();

The appropriate Vertex2 command would be invoked depending on which of the Rect commands is issued.

2.12 Coordinate Transformations

This section and the following discussion through section 2.19 describe the state values and operations necessary for transforming vertex attributes according to a fixed-functionality method. An alternate programmable method for transforming vertex attributes is described in section 2.20.

Vertices, normals, and texture coordinates are transformed before their coordinates are used to produce an image in the framebuffer. We begin with a description of how vertex coordinates are transformed and how this transformation is controlled.

Figure 2.6 diagrams the sequence of transformations that are applied to vertices. The vertex coordinates that are presented to the GL are termed object coordinates. The model-view matrix is applied to these coordinates to yield eye coordinates. Then another matrix, called the projection matrix, is applied to eye coordinates to yield clip coordinates. A perspective division is carried out on clip coordinates to yield normalized device coordinates. A final viewport transformation is applied to convert these coordinates into window coordinates.

Object coordinates, eye coordinates, and clip coordinates are four-dimensional, consisting of x, y, z, and w coordinates (in that order). The model-view and projection matrices are thus 4 × 4.



If a vertex in object coordinates is given by

$$\begin{pmatrix} x_o \\ y_o \\ z_o \\ w_o \end{pmatrix}$$

and the model-view matrix is $M$, then the vertex's eye coordinates are found as

$$\begin{pmatrix} x_e \\ y_e \\ z_e \\ w_e \end{pmatrix} = M \begin{pmatrix} x_o \\ y_o \\ z_o \\ w_o \end{pmatrix}.$$



Similarly, if $P$ is the projection matrix, then the vertex's clip coordinates are

$$\begin{pmatrix} x_c \\ y_c \\ z_c \\ w_c \end{pmatrix} = P \begin{pmatrix} x_e \\ y_e \\ z_e \\ w_e \end{pmatrix}.$$

The vertex's normalized device coordinates are then

$$\begin{pmatrix} x_d \\ y_d \\ z_d \end{pmatrix} = \begin{pmatrix} x_c/w_c \\ y_c/w_c \\ z_c/w_c \end{pmatrix}.$$

2.12.1 Controlling the Viewport

The viewport transformation is determined by the viewport's width and height in pixels, $p_x$ and $p_y$, respectively, and its center $(o_x, o_y)$ (also in pixels). The vertex's window coordinates, $(x_w, y_w, z_w)^T$, are given by

$$\begin{pmatrix} x_w \\ y_w \\ z_w \end{pmatrix} = \begin{pmatrix} (p_x/2)\, x_d + o_x \\ (p_y/2)\, y_d + o_y \\ \frac{f-n}{2}\, z_d + \frac{n+f}{2} \end{pmatrix}.$$

2.12. COORDINATETRANSFORMATIONS

The factor and offset applied to z_d, encoded by n and f, are set using

void DepthRange( clampd n, clampd f );

z_w is represented as either fixed- or floating-point depending on whether the framebuffer's depth buffer uses a fixed- or floating-point representation. If the depth buffer uses fixed-point, we assume that it represents each value k/(2^m − 1), where k ∈ {0, 1, ..., 2^m − 1}, as k (e.g. 1.0 is represented in binary as a string of all ones). The parameters n and f are clamped to the range [0, 1], as are all arguments of type clampd or clampf.

Viewport transformation parameters are specified using

void Viewport( int x, int y, sizei w, sizei h );

where x and y give the x and y window coordinates of the viewport's lower left corner and w and h give the viewport's width and height, respectively. The viewport parameters shown in the above equations are found from these values as o_x = x + w/2 and o_y = y + h/2; p_x = w, p_y = h.

Viewport width and height are clamped to implementation-dependent maximums when specified. The maximum width and height may be found by issuing an appropriate Get command (see chapter 6). The maximum viewport dimensions must be greater than or equal to the visible dimensions of the display being rendered to. INVALID_VALUE is generated if either w or h is negative.

The state required to implement the viewport transformation is four integers and two clamped floating-point values. In the initial state, w and h are set to the width and height, respectively, of the window into which the GL is to do its rendering. o_x and o_y are set to w/2 and h/2, respectively. n and f are set to 0.0 and 1.0, respectively.
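The viewport transformation above can be written out directly. The following is an illustrative sketch added by the editor, with parameters following Viewport(x, y, w, h) and DepthRange(n, f), and (xd, yd, zd) the normalized device coordinates of a vertex:

```c
#include <assert.h>

/* Apply the viewport transformation described in the text. */
static void viewport_transform(double x, double y, double w, double h,
                               double n, double f,
                               double xd, double yd, double zd,
                               double *xw, double *yw, double *zw)
{
    double ox = x + w / 2.0, oy = y + h / 2.0; /* viewport center */
    *xw = (w / 2.0) * xd + ox;
    *yw = (h / 2.0) * yd + oy;
    *zw = ((f - n) / 2.0) * zd + (n + f) / 2.0;
}
```

For a 640 × 480 viewport at the origin with the default depth range, the NDC origin maps to window coordinates (320, 240, 0.5).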

2.12.2 Matrices

The projection matrix and model-view matrix are set and modified with a variety of commands. The affected matrix is determined by the current matrix mode. The current matrix mode is set with

void MatrixMode( enum mode );

which takes one of the pre-defined constants TEXTURE, MODELVIEW, COLOR, or PROJECTION as the argument value. TEXTURE is described later in section 2.12.2, and COLOR is described in section 3.7.3. If the current matrix mode is MODELVIEW, then matrix operations apply to the model-view matrix; if PROJECTION, then they apply to the projection matrix.

The two basic commands for affecting the current matrix are



void LoadMatrix{fd}( T m[16] );
void MultMatrix{fd}( T m[16] );

LoadMatrix takes a pointer to a 4 × 4 matrix stored in column-major order as 16 consecutive floating-point values, i.e. as

$$\begin{pmatrix} a_1 & a_5 & a_9 & a_{13} \\ a_2 & a_6 & a_{10} & a_{14} \\ a_3 & a_7 & a_{11} & a_{15} \\ a_4 & a_8 & a_{12} & a_{16} \end{pmatrix}.$$

(This differs from the standard row-major C ordering for matrix elements. If the standard ordering is used, all of the subsequent transformation equations are transposed, and the columns representing vectors become rows.)

The specified matrix replaces the current matrix with the one pointed to. MultMatrix takes the same type argument as LoadMatrix, but multiplies the current matrix by the one pointed to and replaces the current matrix with the product. If $C$ is the current matrix and $M$ is the matrix pointed to by MultMatrix's argument, then the resulting current matrix, $C'$, is

$$C' = C \cdot M.$$
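The column-major convention and the product C · M can be made concrete with a small routine. This is an editor-added sketch, not GL code: element (row i, column j) of a 4 × 4 matrix lives at index 4j + i of the 16-element array, exactly as LoadMatrix expects.

```c
#include <assert.h>

/* Replace c with the product c * m, both stored column-major. */
static void mult_matrix(double c[16], const double m[16])
{
    double r[16];
    for (int j = 0; j < 4; j++)        /* column of the product */
        for (int i = 0; i < 4; i++) {  /* row of the product */
            double s = 0.0;
            for (int k = 0; k < 4; k++)
                s += c[4 * k + i] * m[4 * j + k];
            r[4 * j + i] = s;
        }
    for (int i = 0; i < 16; i++)
        c[i] = r[i];
}
```

Multiplying the identity by any matrix leaves that matrix unchanged, which makes a convenient sanity check.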

The commands

void LoadTransposeMatrix{fd}( T m[16] );
void MultTransposeMatrix{fd}( T m[16] );

take pointers to 4 × 4 matrices stored in row-major order as 16 consecutive floating-point values, i.e. as

$$\begin{pmatrix} a_1 & a_2 & a_3 & a_4 \\ a_5 & a_6 & a_7 & a_8 \\ a_9 & a_{10} & a_{11} & a_{12} \\ a_{13} & a_{14} & a_{15} & a_{16} \end{pmatrix}.$$

The effect of

LoadTransposeMatrix[fd](m);

is the same as the effect of

LoadMatrix[fd](m^T);

The effect of

2.12. COORDINATETRANSFORMATIONS

MultTransposeMatrix[fd](m);

is the same as the effect of

MultMatrix[fd](m^T);

The command

void LoadIdentity( void );

effectively calls LoadMatrix with the identity matrix:

$$\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}.$$

There are a variety of other commands that manipulate matrices. Rotate, Translate, Scale, Frustum, and Ortho manipulate the current matrix. Each com-putes a matrix and then invokes MultMatrix with this matrix. In the case of

void Rotate{fd}( T θ, T x, T y, T z );

θ gives an angle of rotation in degrees; the coordinates of a vector $v$ are given by $v = (x\ y\ z)^T$. The computed matrix is a counter-clockwise rotation about the line through the origin with the specified axis when that axis is pointing up (i.e. the right-hand rule determines the sense of the rotation angle). The matrix is thus


⎛          0 ⎞
⎜    R     0 ⎟
⎜          0 ⎟
⎝ 0  0  0  1 ⎠ .

Let u = v/||v|| = (x′ y′ z′)^T. If

    ⎛  0   −z′   y′ ⎞
S = ⎜  z′   0   −x′ ⎟
    ⎝ −y′   x′   0  ⎠

then

R = uu^T + cos θ (I − uu^T) + sin θ S.

The arguments to

void Translate{fd}( T x, T y, T z );

give the coordinates of a translation vector as (x y z)^T. The resulting matrix is a translation by the specified vector:

⎛ 1  0  0  x ⎞
⎜ 0  1  0  y ⎟
⎜ 0  0  1  z ⎟
⎝ 0  0  0  1 ⎠ .

void Scale{fd}( T x, T y, T z );

produces a general scaling along the x-, y-, and z-axes. The corresponding matrix is

⎛ x  0  0  0 ⎞
⎜ 0  y  0  0 ⎟
⎜ 0  0  z  0 ⎟
⎝ 0  0  0  1 ⎠ .

For

void Frustum( double l, double r, double b, double t, double n, double f );

the coordinates (l b −n)^T and (r t −n)^T specify the points on the near clipping plane that are mapped to the lower left and upper right corners of the window, respectively (assuming that the eye is located at (0 0 0)^T). f gives the distance from the eye to the far clipping plane. If either n or f is less than or equal to zero, l is equal to r, b is equal to t, or n is equal to f, the error INVALID_VALUE results. The corresponding matrix is

⎛ 2n/(r−l)      0        (r+l)/(r−l)        0      ⎞
⎜     0     2n/(t−b)     (t+b)/(t−b)        0      ⎟
⎜     0         0       −(f+n)/(f−n)  −2fn/(f−n)   ⎟
⎝     0         0            −1             0      ⎠ .
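As a concrete check, the matrix Frustum computes can be built in a few lines. This is a sketch, not the GL implementation; the matrix is filled in column-major order, as MultMatrix would receive it, so element (row i, column j) sits at index (j−1)*4 + (i−1).

```c
/* Hypothetical helper: the perspective matrix described above, stored
 * column-major. Parameter names match the spec's Frustum arguments. */
static void frustum_matrix(double m[16], double l, double r, double b,
                           double t, double n, double f)
{
    for (int i = 0; i < 16; ++i) m[i] = 0.0;
    m[0]  = 2.0 * n / (r - l);       /* row 1, column 1 */
    m[5]  = 2.0 * n / (t - b);       /* row 2, column 2 */
    m[8]  = (r + l) / (r - l);       /* row 1, column 3 */
    m[9]  = (t + b) / (t - b);       /* row 2, column 3 */
    m[10] = -(f + n) / (f - n);      /* row 3, column 3 */
    m[11] = -1.0;                    /* row 4, column 3 */
    m[14] = -2.0 * f * n / (f - n);  /* row 3, column 4 */
}
```

For a symmetric frustum with l = −r and b = −t, the two off-diagonal (r+l) and (t+b) terms vanish, leaving the familiar diagonal-plus-z-column form.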

void Ortho( double l, double r, double b, double t, double n, double f );

describes a matrix that produces parallel projection. (l b −n)^T and (r t −n)^T specify the points on the near clipping plane that are mapped to the lower left and upper right corners of the window, respectively. f gives the distance from the eye to the far clipping plane. If l is equal to r, b is equal to t, or n is equal to f, the error INVALID_VALUE results. The corresponding matrix is

⎛ 2/(r−l)     0         0       −(r+l)/(r−l) ⎞
⎜    0     2/(t−b)      0       −(t+b)/(t−b) ⎟
⎜    0        0     −2/(f−n)    −(f+n)/(f−n) ⎟
⎝    0        0         0             1      ⎠ .
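The Ortho matrix is simpler than the Frustum one, since the last row is (0 0 0 1). As before, this is a hedged sketch in column-major storage, not the GL implementation:

```c
/* Hypothetical helper: the parallel-projection matrix described above,
 * stored column-major. Parameter names match the spec's Ortho arguments. */
static void ortho_matrix(double m[16], double l, double r, double b,
                         double t, double n, double f)
{
    for (int i = 0; i < 16; ++i) m[i] = 0.0;
    m[0]  = 2.0 / (r - l);           /* scale x into [-1, 1] */
    m[5]  = 2.0 / (t - b);           /* scale y into [-1, 1] */
    m[10] = -2.0 / (f - n);          /* scale (and flip) z    */
    m[12] = -(r + l) / (r - l);      /* translation, column 4 */
    m[13] = -(t + b) / (t - b);
    m[14] = -(f + n) / (f - n);
    m[15] = 1.0;
}
```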

For each texture coordinate set, a 4 × 4 matrix is applied to the corresponding texture coordinates. This matrix is applied as

⎛ m1  m5  m9   m13 ⎞ ⎛ s ⎞
⎜ m2  m6  m10  m14 ⎟ ⎜ t ⎟
⎜ m3  m7  m11  m15 ⎟ ⎜ r ⎟ ,
⎝ m4  m8  m12  m16 ⎠ ⎝ q ⎠

where the left matrix is the current texture matrix. The matrix is applied to the coordinates resulting from texture coordinate generation (which may simply be the current texture coordinates), and the resulting transformed coordinates become the texture coordinates associated with a vertex. Setting the matrix mode to TEXTURE causes the already described matrix operations to apply to the texture matrix.

The command

void ActiveTexture( enum texture );

specifies the active texture unit selector, ACTIVE_TEXTURE. Each texture unit contains up to two distinct sub-units: a texture coordinate processing unit (consisting of a texture matrix stack and texture coordinate generation state) and a texture image unit (consisting of all the texture state defined in section 3.9). In implementations with a different number of supported texture coordinate sets and texture image units, some texture units may consist of only one of the two sub-units.

The active texture unit selector specifies the texture coordinate set accessed by commands involving texture coordinate processing. Such commands include those accessing the current matrix stack (if MATRIX_MODE is TEXTURE), TexEnv commands controlling point sprite coordinate replacement (see section 3.4), TexGen (section 2.12.4), Enable/Disable (if any texture coordinate generation enum is selected), as well as queries of the current texture coordinates and current raster texture coordinates. If the texture coordinate set number corresponding to the current value of ACTIVE_TEXTURE is greater than or equal to the implementation-dependent constant MAX_TEXTURE_COORDS, the error INVALID_OPERATION is generated by any such command.


The active texture unit selector also selects the texture image unit accessed by commands involving texture image processing (section 3.9). Such commands include all variants of TexEnv (except for those controlling point sprite coordinate replacement), TexParameter, and TexImage commands, BindTexture, Enable/Disable for any texture target (e.g., TEXTURE_2D), and queries of all such state. If the texture image unit number corresponding to the current value of ACTIVE_TEXTURE is greater than or equal to the implementation-dependent constant MAX_COMBINED_TEXTURE_IMAGE_UNITS, the error INVALID_OPERATION is generated by any such command.

ActiveTexture generates the error INVALID_ENUM if an invalid texture is specified. texture is a symbolic constant of the form TEXTUREi, indicating that texture unit i is to be modified. The constants obey TEXTUREi = TEXTURE0 + i (i is in the range 0 to k − 1, where k is the larger of MAX_TEXTURE_COORDS and MAX_COMBINED_TEXTURE_IMAGE_UNITS).

For backwards compatibility, the implementation-dependent constant MAX_TEXTURE_UNITS specifies the number of conventional texture units supported by the implementation. Its value must be no larger than the minimum of MAX_TEXTURE_COORDS and MAX_COMBINED_TEXTURE_IMAGE_UNITS.

There is a stack of matrices for each of matrix modes MODELVIEW, PROJECTION, and COLOR, and for each texture unit. For MODELVIEW mode, the stack depth is at least 32 (that is, there is a stack of at least 32 model-view matrices). For the other modes, the depth is at least 2. Texture matrix stacks for all texture units have the same depth. The current matrix in any mode is the matrix on the top of the stack for that mode.

void PushMatrix( void );

pushes the stack down by one, duplicating the current matrix in both the top of the stack and the entry below it.

void PopMatrix( void );

pops the top entry off of the stack, replacing the current matrix with the matrix that was the second entry in the stack. The pushing or popping takes place on the stack corresponding to the current matrix mode. Popping a matrix off a stack with only one entry generates the error STACK_UNDERFLOW; pushing a matrix onto a full stack generates STACK_OVERFLOW.

When the current matrix mode is TEXTURE, the texture matrix stack of the active texture unit is pushed or popped.
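The push/pop semantics above can be modeled with a fixed-depth stack whose top entry is the current matrix. The struct, depth, and error names below are illustrative only; they are not GL types or GL error codes.

```c
#include <string.h>

#define DEPTH 32   /* minimum MODELVIEW stack depth; illustrative */

enum stack_error { STACK_OK, STACK_OVERFLOW_ERR, STACK_UNDERFLOW_ERR };

/* Toy model of one matrix stack; m[top] is the current matrix. */
struct mat_stack {
    double m[DEPTH][16];   /* column-major 4x4 matrices */
    int top;               /* index of the topmost (current) entry */
};

static enum stack_error push_matrix(struct mat_stack *s)
{
    if (s->top == DEPTH - 1)
        return STACK_OVERFLOW_ERR;     /* stack already full */
    /* duplicate the current matrix into the new top entry */
    memcpy(s->m[s->top + 1], s->m[s->top], sizeof s->m[0]);
    s->top++;
    return STACK_OK;
}

static enum stack_error pop_matrix(struct mat_stack *s)
{
    if (s->top == 0)
        return STACK_UNDERFLOW_ERR;    /* only one entry on the stack */
    s->top--;
    return STACK_OK;
}
```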

The state required to implement transformations consists of an integer for the active texture unit selector, a four-valued integer indicating the current matrix mode, one stack of at least two 4 × 4 matrices for each of COLOR, PROJECTION, and each texture coordinate set, TEXTURE; and a stack of at least 32 4 × 4 matrices for MODELVIEW. Each matrix stack has an associated stack pointer. Initially, there is only one matrix on each stack, and all matrices are set to the identity. The initial active texture unit selector is TEXTURE0, and the initial matrix mode is MODELVIEW.

2.12.3 Normal Transformation

Finally, we consider how the model-view matrix and transformation state affect normals. Before use in lighting, normals are transformed to eye coordinates by a matrix derived from the model-view matrix. Rescaling and normalization operations are performed on the transformed normals to make them unit length prior to use in lighting. Rescaling and normalization are controlled by

void Enable( enum target );

and

void Disable( enum target );

with target equal to RESCALE_NORMAL or NORMALIZE. This requires two bits of state. The initial state is for normals not to be rescaled or normalized.

If the model-view matrix is M, then the normal is transformed to eye coordinates by:

(nx′ ny′ nz′ q′) = (nx ny nz q) · M⁻¹

where, if

⎛ x ⎞
⎜ y ⎟
⎜ z ⎟
⎝ w ⎠

are the associated vertex coordinates, then

q = ⎧ 0,                          w = 0
    ⎨                                          (2.1)
    ⎩ −(nx x + ny y + nz z)/w,    w ≠ 0

Implementations may choose instead to transform (nx ny nz) to eye coordinates using

(nx′ ny′ nz′) = (nx ny nz) · Mu⁻¹

where Mu is the upper leftmost 3 × 3 matrix taken from M. Rescale multiplies the transformed normals by a scale factor

(nx″ ny″ nz″) = f (nx′ ny′ nz′)

If rescaling is disabled, then f = 1. If rescaling is enabled, then f is computed as (mij denotes the matrix element in row i and column j of M⁻¹, numbering the topmost row of the matrix as row 1 and the leftmost column as column 1)

f = 1 / √(m31² + m32² + m33²)

Note that if the normals sent to GL were unit length and the model-view matrix uniformly scales space, then rescale makes the transformed normals unit length.

Alternatively, an implementation may choose f as

f = 1 / √(nx′² + ny′² + nz′²)

recomputing f for each normal. This makes all non-zero length normals unit length regardless of their input length and the nature of the model-view matrix.

After rescaling, the final transformed normal used in lighting, nf, is computed as

nf = m (nx″ ny″ nz″)

If normalization is disabled, then m = 1. Otherwise

m = 1 / √(nx″² + ny″² + nz″²)

Because we specify neither the floating-point format nor the means for matrix inversion, we cannot specify behavior in the case of a poorly-conditioned (nearly singular) model-view matrix M. In case of an exactly singular matrix, the transformed normal is undefined. If the GL implementation determines that the model-view matrix is uninvertible, then the entries in the inverted matrix are arbitrary. In any case, neither normal transformation nor use of the transformed normal may lead to GL interruption or termination.


2.12.4 Generating Texture Coordinates

Texture coordinates associated with a vertex may either be taken from the current texture coordinates or generated according to a function dependent on vertex coordinates. The command

void TexGen{ifd}( enum coord, enum pname, T param );

void TexGen{ifd}v( enum coord, enum pname, T params );

controls texture coordinate generation. coord must be one of the constants S, T, R, or Q, indicating that the pertinent coordinate is the s, t, r, or q coordinate, respectively. In the first form of the command, param is a symbolic constant specifying a single-valued texture generation parameter; in the second form, params is a pointer to an array of values that specify texture generation parameters. pname must be one of the three symbolic constants TEXTURE_GEN_MODE, OBJECT_PLANE, or EYE_PLANE. If pname is TEXTURE_GEN_MODE, then either params points to or param is an integer that is one of the symbolic constants OBJECT_LINEAR, EYE_LINEAR, SPHERE_MAP, REFLECTION_MAP, or NORMAL_MAP.

If TEXTURE_GEN_MODE indicates OBJECT_LINEAR, then the generation function for the coordinate indicated by coord is

g = p1 xo + p2 yo + p3 zo + p4 wo.

xo, yo, zo, and wo are the object coordinates of the vertex. p1, ..., p4 are specified by calling TexGen with pname set to OBJECT_PLANE, in which case params points to an array containing p1, ..., p4. There is a distinct group of plane equation coefficients for each texture coordinate; coord indicates the coordinate to which the specified coefficients pertain.

If TEXTURE_GEN_MODE indicates EYE_LINEAR, then the function is

g = p1′ xe + p2′ ye + p3′ ze + p4′ we

where

(p1′ p2′ p3′ p4′) = (p1 p2 p3 p4) M⁻¹

xe, ye, ze, and we are the eye coordinates of the vertex. p1, ..., p4 are set by calling TexGen with pname set to EYE_PLANE, in correspondence with setting the coefficients in the OBJECT_PLANE case. M is the model-view matrix in effect when p1, ..., p4 are specified. Computed texture coordinates may be inaccurate or undefined if M is poorly conditioned or singular.

When used with a suitably constructed texture image, calling TexGen with TEXTURE_GEN_MODE indicating SPHERE_MAP can simulate the reflected image of a spherical environment on a polygon. SPHERE_MAP texture coordinates are generated as follows. Denote the unit vector pointing from the origin to the vertex (in eye coordinates) by u. Denote the current normal, after transformation to eye coordinates, by nf. Let r = (rx ry rz)^T, the reflection vector, be given by

r = u − 2 nf (nf^T u),

and let m = 2√(rx² + ry² + (rz + 1)²). Then the value assigned to an s coordinate (the first TexGen argument value is S) is s = rx/m + 1/2; the value assigned to a t coordinate is t = ry/m + 1/2. Calling TexGen with a coord of either R or Q when pname indicates SPHERE_MAP generates the error INVALID_ENUM.

If TEXTURE_GEN_MODE indicates REFLECTION_MAP, compute the reflection vector r as described for the SPHERE_MAP mode. Then the value assigned to an s coordinate is s = rx; the value assigned to a t coordinate is t = ry; and the value assigned to an r coordinate is r = rz. Calling TexGen with a coord of Q when pname indicates REFLECTION_MAP generates the error INVALID_ENUM.

If TEXTURE_GEN_MODE indicates NORMAL_MAP, compute the normal vector nf as described in section 2.12.3. Then the value assigned to an s coordinate is s = nfx; the value assigned to a t coordinate is t = nfy; and the value assigned to an r coordinate is r = nfz (the values nfx, nfy, and nfz are the components of nf). Calling TexGen with a coord of Q when pname indicates NORMAL_MAP generates the error INVALID_ENUM.

A texture coordinate generation function is enabled or disabled using Enable and Disable with an argument of TEXTURE_GEN_S, TEXTURE_GEN_T, TEXTURE_GEN_R, or TEXTURE_GEN_Q (each indicates the corresponding texture coordinate). When enabled, the specified texture coordinate is computed according to the current EYE_LINEAR, OBJECT_LINEAR, or SPHERE_MAP specification, depending on the current setting of TEXTURE_GEN_MODE for that coordinate. When disabled, subsequent vertices will take the indicated texture coordinate from the current texture coordinates.

The state required for texture coordinate generation for each texture unit comprises a five-valued integer for each coordinate indicating coordinate generation mode, and a bit for each coordinate to indicate whether texture coordinate generation is enabled or disabled. In addition, four coefficients are required for the four coordinates for each of EYE_LINEAR and OBJECT_LINEAR. The initial state has the texture generation function disabled for all texture coordinates. The initial values of pi for s are all 0 except p1, which is one; for t all the pi are zero except p2, which is 1. The values of pi for r and q are all 0. These values of pi apply for both the EYE_LINEAR and OBJECT_LINEAR versions. Initially all texture generation modes are EYE_LINEAR.


2.13 Asynchronous Queries

Asynchronous queries provide a mechanism to return information about the processing of a sequence of GL commands. There are two query types supported by the GL. Transform feedback queries (see section 2.15) return information on the number of vertices and primitives processed by the GL and written to one or more buffer objects. Occlusion queries (see section 4.1.7) count the number of fragments or samples that pass the depth test.

The results of asynchronous queries are not returned by the GL immediately after the completion of the last command in the set; subsequent commands can be processed while the query results are not complete. When available, the query results are stored in an associated query object. The commands described in section 6.1.12 provide mechanisms to determine when query results are available and return the actual results of the query. The name space for query objects is the unsigned integers, with zero reserved by the GL.

Each type of query supported by the GL has an active query object name. If the active query object name for a query type is non-zero, the GL is currently tracking the information corresponding to that query type and the query results will be written into the corresponding query object. If the active query object name for a query type is zero, no such information is being tracked.

A query object is created by calling

void BeginQuery( enum target, uint id );

with an unused name id. target indicates the type of query to be performed; valid values of target are defined in subsequent sections. When a query object is created, the name id is marked as used and associated with a new query object.

BeginQuery sets the active query object name for the query type given by target to id. If BeginQuery is called with an id of zero, if the active query object name for target is non-zero, if id is the active query object name for any query type, or if id is the active query object for conditional rendering (see section 2.14), the error INVALID_OPERATION is generated.

The command

void EndQuery( enum target );

marks the end of the sequence of commands to be tracked for the query type given by target. The active query object for target is updated to indicate that query results are not available, and the active query object name for target is reset to zero. When the commands issued prior to EndQuery have completed and a final query result is available, the query object active when EndQuery is called is updated by the GL. The query object is updated to indicate that the query results are available and to contain the query result. If the active query object name for target is zero when EndQuery is called, the error INVALID_OPERATION is generated.

The command

void GenQueries( sizei n, uint *ids );

returns n previously unused query object names in ids. These names are marked as used, but no object is associated with them until the first time they are used by BeginQuery.

Query objects are deleted by calling

void DeleteQueries( sizei n, const uint *ids );

ids contains n names of query objects to be deleted. After a query object is deleted, its name is again unused. Unused names in ids are silently ignored.

Query objects contain two pieces of state: a single bit indicating whether a query result is available, and an integer containing the query result value. The number of bits used to represent the query result is implementation-dependent. In the initial state of a query object, the result is available and its value is zero.

The necessary state for each query type is an unsigned integer holding the active query object name (zero if no query object is active), and any state necessary to keep the current results of an asynchronous query in progress.

2.14 Conditional Rendering

Conditional rendering can be used to discard rendering commands based on the result of an occlusion query. Conditional rendering is started and stopped using the commands

void BeginConditionalRender( uint id, enum mode );

void EndConditionalRender( void );

id specifies the name of an occlusion query object whose results are used to determine if the rendering commands are discarded. If the result (SAMPLES_PASSED) of the query is zero, all rendering commands between BeginConditionalRender and the corresponding EndConditionalRender are discarded. In this case, Begin, End, all vertex array commands performing an implicit Begin and End, DrawPixels (see section 3.7.4), Bitmap (see section 3.8), Clear (see section 4.2.3), Accum (see section 4.2.4), CopyPixels (see section 4.3.3), and EvalMesh1 and EvalMesh2 (see section 5.1) have no effect. The effect of commands setting current vertex state, such as Color or VertexAttrib, is undefined. If the result of the occlusion query is non-zero, such commands are not discarded.

mode specifies how BeginConditionalRender interprets the results of the occlusion query given by id. If mode is QUERY_WAIT, the GL waits for the results of the query to be available and then uses the results to determine if subsequent rendering commands are discarded. If mode is QUERY_NO_WAIT, the GL may choose to unconditionally execute the subsequent rendering commands without waiting for the query to complete.

If mode is QUERY_BY_REGION_WAIT, the GL will also wait for occlusion query results and discard rendering commands if the result of the occlusion query is zero. If the query result is non-zero, subsequent rendering commands are executed, but the GL may discard the results of the commands for any region of the framebuffer that did not contribute to the sample count in the specified occlusion query. Any such discarding is done in an implementation-dependent manner, but the rendering command results may not be discarded for any samples that contributed to the occlusion query sample count. If mode is QUERY_BY_REGION_NO_WAIT, the GL operates as in QUERY_BY_REGION_WAIT, but may choose to unconditionally execute the subsequent rendering commands without waiting for the query to complete.

If BeginConditionalRender is called while conditional rendering is in progress, or if EndConditionalRender is called while conditional rendering is not in progress, the error INVALID_OPERATION is generated. The error INVALID_VALUE is generated if id is not the name of an existing query object. The error INVALID_OPERATION is generated if id is the name of a query object with a target other than SAMPLES_PASSED, or if id is the name of a query currently in progress.

2.15 Transform Feedback

In transform feedback mode, attributes of the vertices of transformed primitives processed by a vertex shader are written out to one or more buffer objects. The vertices are fed back after vertex color clamping, but before clipping. The transformed vertices may be optionally discarded after being stored into one or more buffer objects, or they can be passed on down to the clipping stage for further processing. The set of attributes captured is determined when a program is linked.

Transform feedback is started and finished by calling

void BeginTransformFeedback( enum primitiveMode );

and

Transform Feedback primitiveMode    Allowed render primitive (Begin) modes
POINTS                              POINTS
LINES                               LINES, LINE_LOOP, LINE_STRIP
TRIANGLES                           TRIANGLES, TRIANGLE_STRIP, TRIANGLE_FAN,
                                    QUADS, QUAD_STRIP, POLYGON

Table 2.9: Legal combinations of the transform feedback primitive mode, as passed to BeginTransformFeedback, and the current primitive mode.

void EndTransformFeedback( void );

respectively. Transform feedback is said to be active after a call to BeginTransformFeedback and inactive after a call to EndTransformFeedback. primitiveMode is one of TRIANGLES, LINES, or POINTS, and specifies the output type of primitives that will be recorded into the buffer objects bound for transform feedback (see below). primitiveMode restricts the primitive types that may be rendered while transform feedback is active, as shown in table 2.9.

Transform feedback commands must be paired; the error INVALID_OPERATION is generated by BeginTransformFeedback if transform feedback is active, and by EndTransformFeedback if transform feedback is inactive.

Transform feedback mode captures the values of varying variables written by an active vertex shader. The error INVALID_OPERATION is generated by BeginTransformFeedback if no vertex shader is active.

When transform feedback is active, all geometric primitives generated must be compatible with the value of primitiveMode passed to BeginTransformFeedback. The error INVALID_OPERATION is generated by Begin or any operation that implicitly calls Begin (such as DrawElements) if mode is not one of the allowed modes in table 2.9.

Buffer objects are made to be targets of transform feedback by calling one of the commands

void BindBufferRange( enum target, uint index, uint buffer, intptr offset, sizeiptr size );

void BindBufferBase( enum target, uint index, uint buffer );

with target set to TRANSFORM_FEEDBACK_BUFFER. There is an array of buffer object binding points that are used while transform feedback is active, plus a single general binding point that can be used by other buffer object manipulation functions (e.g., BindBuffer, MapBuffer). Both commands bind the buffer object named by buffer to the general binding point, and additionally bind the buffer object to the binding point in the array given by index. The error INVALID_VALUE is generated if index is greater than or equal to the value of MAX_TRANSFORM_FEEDBACK_SEPARATE_ATTRIBS.

For BindBufferRange, offset specifies a starting offset into the buffer object buffer, and size specifies the amount of data that can be written to the buffer object while transform feedback mode is active. Both offset and size are in basic machine units. The error INVALID_VALUE is generated if the value of size is less than or equal to zero, if offset + size is greater than the value of BUFFER_SIZE, or if either offset or size are not word-aligned. BindBufferBase is equivalent to calling BindBufferRange with offset zero and size equal to the size of buffer, rounded down so that it is word-aligned.

When an individual point, line, or triangle primitive reaches the transform feedback stage while transform feedback is active, the values of the specified varying variables of the vertex are appended to the buffer objects bound to the transform feedback binding points. The attributes of the first vertex received after BeginTransformFeedback are written at the starting offsets of the bound buffer objects set by BindBufferRange, and subsequent vertex attributes are appended to the buffer object. When capturing line and triangle primitives, all attributes of the first vertex are written first, followed by attributes of the subsequent vertices. When writing varying variables that are arrays, individual array elements are written in order. For multi-component varying variables or varying array elements, the individual components are written in order. The value for any attribute specified to be streamed to a buffer object but not actually written by a vertex shader is undefined.

When quads and polygons are provided to transform feedback with a primitive mode of TRIANGLES, they will be tessellated and recorded as triangles (the order of tessellation within a primitive is undefined). Individual lines or triangles of a strip or fan primitive will be extracted and recorded separately. Incomplete primitives are not recorded.

Transform feedback can operate in either INTERLEAVED_ATTRIBS or SEPARATE_ATTRIBS mode. In INTERLEAVED_ATTRIBS mode, the values of one or more varyings are written, interleaved, into the buffer object bound to the first transform feedback binding point (index = 0). If more than one varying variable is written, they will be recorded in the order specified by TransformFeedbackVaryings (see section 2.20.3). In SEPARATE_ATTRIBS mode, the first varying variable specified by TransformFeedbackVaryings is written to the first transform feedback binding point; subsequent varying variables are written to the subsequent transform feedback binding points. The total number of variables that may be captured in separate mode is given by MAX_TRANSFORM_FEEDBACK_SEPARATE_ATTRIBS.

If recording the vertices of a primitive to the buffer objects being used for transform feedback purposes would result in either exceeding the limits of any buffer object's size, or in exceeding the end position offset + size − 1, as set by BindBufferRange, then no vertices of that primitive are recorded in any buffer object, and the counter corresponding to the asynchronous query target TRANSFORM_FEEDBACK_PRIMITIVES_WRITTEN (see section 2.16) is not incremented.

In either separate or interleaved modes, all transform feedback binding points that will be written to must have buffer objects bound when BeginTransformFeedback is called. The error INVALID_OPERATION is generated by BeginTransformFeedback if any binding point used in transform feedback mode does not have a buffer object bound. In interleaved mode, only the first buffer object binding point is ever written to. The error INVALID_OPERATION is also generated by BeginTransformFeedback if no binding points would be used, either because no program object is active or because the active program object has specified no varying variables to record.

While transform feedback is active, the set of attached buffer objects and the set of varying variables captured may not be changed. If transform feedback is active, the error INVALID_OPERATION is generated by UseProgram, by LinkProgram if program is the currently active program object, and by BindBufferRange or BindBufferBase if target is TRANSFORM_FEEDBACK_BUFFER.

Buffers should not be bound or in use for both transform feedback and other purposes in the GL. Specifically, if a buffer object is simultaneously bound to a transform feedback buffer binding point and elsewhere in the GL, any writes to or reads from the buffer generate undefined values. Examples of such bindings include DrawPixels and ReadPixels to a pixel buffer object binding point and client access to a buffer mapped with MapBuffer.

However, if a buffer object is written and read sequentially by transform feedback and other mechanisms, it is the responsibility of the GL to ensure that data are accessed consistently, even if the implementation performs the operations in a pipelined manner. For example, MapBuffer may need to block pending the completion of a previous transform feedback operation.


2.16 Primitive Queries

Primitive queries use query objects to track the number of primitives generated by the GL and to track the number of primitives written to transform feedback buffers.

When BeginQuery is called with a target of PRIMITIVESGENERATED, the primitives-generated count maintained by the GL is set to zero. When the generated primitive query is active, the primitives-generated count is incremented every time a primitive reaches the “Discarding Primitives Before Rasterization” stage (see section 3.1) immediately before rasterization.

When BeginQuery is called with a target of TRANSFORM_FEEDBACK_PRIMITIVES_WRITTEN, the transform-feedback-primitives-written count maintained by the GL is set to zero. When the transform feedback primitives written query is active, the transform-feedback-primitives-written count is incremented every time a primitive is recorded into a buffer object. If transform feedback is not active, this counter is not incremented. If the primitive does not fit in the buffer object, the counter is not incremented.

These two queries can be used together to determine if all primitives have been written to the bound feedback buffers; if both queries are run simultaneously and the query results are equal, all primitives have been written to the buffer(s). If the number of primitives written is less than the number of primitives generated, the buffer is full.

2.17 Clipping

Primitives are clipped to the clip volume. In clip coordinates, the view volume is defined by

−wc ≤ xc ≤ wc    −wc ≤ yc ≤ wc    −wc ≤ zc ≤ wc.

This view volume may be further restricted by as many as n client-defined clip planes to generate the clip volume. (n is an implementation-dependent maximum that must be at least 6.) Each client-defined plane specifies a half-space. The clip volume is the intersection of all such half-spaces with the view volume (if no client-defined clip planes are enabled, the clip volume is the view volume).

A client-defined clip plane is specified with

void ClipPlane( enum p, double eqn[4] );

The value of the first argument, p, is a symbolic constant, CLIP_PLANEi, where i is an integer between 0 and n − 1, indicating one of n client-defined clip planes. eqn is an array of four double-precision floating-point values. These are the coefficients of a plane equation in object coordinates: p1, p2, p3, and p4 (in that order). The inverse of the current model-view matrix is applied to these coefficients, at the time they are specified, yielding

(p1′ p2′ p3′ p4′) = (p1 p2 p3 p4) M⁻¹

(where M is the current model-view matrix; the resulting plane equation is undefined if M is singular and may be inaccurate if M is poorly-conditioned) to obtain the plane equation coefficients in eye coordinates. All points with eye coordinates (xe ye ze we)^T that satisfy

(p1′ p2′ p3′ p4′) · (xe ye ze we)^T ≥ 0

lie in the half-space defined by the plane; points that do not satisfy this condition do not lie in the half-space.

When a vertex shader is active, the vector (xe ye ze we)^T is no longer computed. Instead, the value of the gl_ClipVertex built-in variable is used in its place. If gl_ClipVertex is not written by the vertex shader, its value is undefined, which implies that the results of clipping to any client-defined clip planes are also undefined. The user must ensure that the clip vertex and client-defined clip planes are defined in the same coordinate space.

A vertex shader may, instead of writing to gl_ClipVertex, write a single clip distance for each supported clip plane to elements of the gl_ClipDistance[] array. The half-space corresponding to clip plane n is then given by the set of points satisfying the inequality

cn(P) ≥ 0,

where cn(P) is the value of clip distance n at point P. For point primitives, cn(P) is simply the clip distance for the vertex in question. For line and triangle primitives, per-vertex clip distances are interpolated using a weighted mean, with weights derived according to the algorithms described in sections 3.5 and 3.6.

Client-defined clip planes are enabled with the generic Enable command and disabled with the Disable command. The value of the argument to either command is CLIP_PLANEi, where i is an integer between 0 and n−1; specifying a value of i enables or disables the plane equation with index i. The constants obey CLIP_PLANEi = CLIP_PLANE0 + i.


If the primitive under consideration is a point, then clipping passes it unchanged if it lies within the clip volume; otherwise, it is discarded. If the primitive is a line segment, then clipping does nothing to it if it lies entirely within the clip volume and discards it if it lies entirely outside the volume. If part of the line segment lies in the volume and part lies outside, then the line segment is clipped and new vertex coordinates are computed for one or both vertices. A clipped line segment endpoint lies on both the original line segment and the boundary of the clip volume.

This clipping produces a value, 0 ≤ t ≤ 1, for each clipped vertex. If the coordinates of a clipped vertex are P and the original vertices' coordinates are P_1 and P_2, then t is given by

\[
P = t P_1 + (1 - t) P_2.
\]

The value of t is used in color, secondary color, texture coordinate, and fog coordinate clipping (section 2.19.8).

If the primitive is a polygon, then it is passed if every one of its edges lies entirely inside the clip volume and either clipped or discarded otherwise. Polygon clipping may cause polygon edges to be clipped, but because polygon connectivity must be maintained, these clipped edges are connected by new edges that lie along the clip volume's boundary. Thus, clipping may require the introduction of new vertices into a polygon. Edge flags are associated with these vertices so that edges introduced by clipping are flagged as boundary (edge flag TRUE), and so that original edges of the polygon that become cut off at these vertices retain their original flags.

If it happens that a polygon intersects an edge of the clip volume’s boundary, then the clipped polygon must include a point on this boundary edge. This point must lie in the intersection of the boundary edge and the convex hull of the vertices of the original polygon. We impose this requirement because the polygon may not be exactly planar.

Primitives rendered with clip planes must satisfy a complementarity criterion. Suppose a single clip plane with coefficients \( \begin{pmatrix} p_1' & p_2' & p_3' & p_4' \end{pmatrix} \) (or a number of similarly specified clip planes) is enabled and a series of primitives are drawn. Next, suppose that the original clip plane is respecified with coefficients \( \begin{pmatrix} -p_1' & -p_2' & -p_3' & -p_4' \end{pmatrix} \) (and correspondingly for any other clip planes) and the primitives are drawn again (and the GL is otherwise in the same state). In this case, primitives must not be missing any pixels, nor may any pixels be drawn twice in regions where those primitives are cut by the clip planes.

The state required for clipping is at least 6 sets of plane equations (each consisting of four double-precision floating-point coefficients) and at least 6 corresponding bits indicating which of these client-defined plane equations are enabled. In the initial state, all client-defined plane equation coefficients are zero and all planes are disabled.

2.18 Current Raster Position

The current raster position is used by commands that directly affect pixels in the framebuffer. These commands, which bypass vertex transformation and primitive assembly, are described in the next chapter. The current raster position, however, shares some of the characteristics of a vertex.

The current raster position is set using one of the commands

void RasterPos{234}{sifd}( T coords );

void RasterPos{234}{sifd}v( T coords );

RasterPos4 takes four values indicating x, y, z, and w. RasterPos3 (or RasterPos2) is analogous, but sets only x, y, and z with w implicitly set to 1 (or only x and y with z implicitly set to 0 and w implicitly set to 1).

Gets of CURRENT_RASTER_TEXTURE_COORDS are affected by the setting of the state ACTIVE_TEXTURE.

The coordinates are treated as if they were specified in a Vertex command. If a vertex shader is active, this vertex shader is executed using the x, y, z, and w coordinates as the object coordinates of the vertex. Otherwise, the x, y, z, and w coordinates are transformed by the current model-view and projection matrices. These coordinates, along with current values, are used to generate primary and secondary colors and texture coordinates just as is done for a vertex. The colors and texture coordinates so produced replace the colors and texture coordinates stored in the current raster position's associated data. If a vertex shader is active then the current raster distance is set to the value of the shader built-in varying gl_FogFragCoord. Otherwise, if the value of the fog source (see section 3.11) is FOG_COORD, then the current raster distance is set to the value of the current fog coordinate. Otherwise, the current raster distance is set to the distance from the origin of the eye coordinate system to the vertex as transformed by only the current model-view matrix. This distance may be approximated as discussed in section 3.11.

Since vertex shaders may be executed when the raster position is set, any attributes not written by the shader will result in undefined state in the current raster position. Vertex shaders should output all varying variables that would be used when rasterizing pixel primitives using the current raster position.

The transformed coordinates are passed to clipping as if they represented a point. If the "point" is not culled, then the projection to window coordinates is computed (section 2.12) and saved as the current raster position, and the valid bit is set. If the "point" is culled, the current raster position and its associated data become indeterminate and the valid bit is cleared. Figure 2.7 summarizes the behavior of the current raster position.

Alternately, the current raster position may be set by one of the WindowPos commands:

void WindowPos{23}{ifds}( T coords );

void WindowPos{23}{ifds}v( const T coords );

WindowPos3 takes three values indicating x, y and z, while WindowPos2 takes two values indicating x and y with z implicitly set to 0. The current raster position, (x_w, y_w, z_w, w_c), is defined by:

\[
x_w = x
\]
\[
y_w = y
\]
\[
z_w = \begin{cases} n, & z \le 0 \\ f, & z \ge 1 \\ n + z(f - n), & \text{otherwise} \end{cases}
\]
\[
w_c = 1
\]

where n and f are the values passed to DepthRange (see section 2.12.1).

Lighting, texture coordinate generation and transformation, and clipping are not performed by the WindowPos functions. Instead, in RGBA mode, the current raster color and secondary color are obtained from the current color and secondary color, respectively. If vertex color clamping is enabled, the current raster color and secondary color are clamped to [0,1]. In color index mode, the current raster color index is set to the current color index. The current raster texture coordinates are set to the current texture coordinates, and the valid bit is set.

If the value of the fog source is FOG_COORD, then the current raster distance is set to the value of the current fog coordinate. Otherwise, the raster distance is set to 0.

The current raster position requires six single-precision floating-point values for its x_w, y_w, and z_w window coordinates, its w_c clip coordinate, its raster distance (used as the fog coordinate in raster processing), a single valid bit, four floating-point values to store the current RGBA color, four floating-point values to store the current RGBA secondary color, one floating-point value to store the current color index, and four floating-point values for texture coordinates for each texture unit. In the initial state, the coordinates and texture coordinates are all (0, 0, 0, 1), the eye coordinate distance is 0, the fog coordinate is 0, the valid bit is set, the associated RGBA color is (1, 1, 1, 1), the associated RGBA secondary color is (0, 0, 0, 1), and the associated color index color is 1. In RGBA mode, the associated color index always has its initial value; in color index mode, the RGBA color and secondary color always maintain their initial values.

2.19 Colors and Coloring

Figures 2.8 and 2.9 diagram the processing of RGBA colors and color indices before rasterization. Incoming colors arrive in one of several formats. Table 2.10 summarizes the conversions that take place on R, G, B, and A components depending on which version of the Color command was invoked to specify the components. As a result of limited precision, some converted values will not be represented exactly. In color index mode, a single-valued color index is not mapped.

Next, lighting, if enabled, produces either a color index or primary and secondary colors. If lighting is disabled, the current color index or current color (primary color) and current secondary color are used in further processing. After lighting, RGBA colors may be clamped to the range [0, 1] as described in section 2.19.6. A color index is converted to fixed-point and then its integer portion is masked (see section 2.19.6). After clamping or masking, a primitive may be flatshaded, indicating that all vertices of the primitive are to have the same colors. Finally, if a primitive is clipped, then colors (and texture coordinates) must be computed at the vertices introduced or modified by clipping.

GL Type of c   Conversion to floating-point
ubyte          c / (2^8 − 1)
byte           (2c + 1) / (2^8 − 1)
ushort         c / (2^16 − 1)
short          (2c + 1) / (2^16 − 1)
uint           c / (2^32 − 1)
int            (2c + 1) / (2^32 − 1)
half           c
float          c
double         c

Table 2.10: Component conversions. Color, normal, and depth component values (c) of different types are converted to an internal floating-point representation using the equations in this table. All arithmetic is done in the internal floating-point format. These conversions apply to components specified as parameters to GL commands and to components in pixel data. The equations remain the same even if the implemented ranges of the GL data types are greater than the minimum required ranges. (Refer to table 2.2.)

2.19.1 Lighting

GL lighting computes colors for each vertex sent to the GL. This is accomplished by applying an equation defined by a client-specified lighting model to a collection of parameters that can include the vertex coordinates, the coordinates of one or more light sources, the current normal, and parameters defining the characteristics of the light sources and a current material. The following discussion assumes that the GL is in RGBA mode. (Color index lighting is described in section 2.19.5.)

Lighting is turned on or off using the generic Enable or Disable commands with the symbolic value LIGHTING. If lighting is off, the current color and current secondary color are assigned to the vertex primary and secondary color, respectively. If lighting is on, colors computed from the current lighting parameters are assigned to the vertex primary and secondary colors.

Lighting Operation

A lighting parameter is of one of five types: color, position, direction, real, or boolean. A color parameter consists of four floating-point values, one for each of R, G, B, and A, in that order. There are no restrictions on the allowable values for these parameters. A position parameter consists of four floating-point coordinates (x, y, z, and w) that specify a position in object coordinates (w may be zero, indicating a point at infinity in the direction given by x, y, and z). A direction parameter consists of three floating-point coordinates (x, y, and z) that specify a direction in object coordinates. A real parameter is one floating-point value. The various values and their types are summarized in table 2.11. The result of a lighting computation is undefined if a value for a parameter is specified that is outside the range given for that parameter in the table.

There are n light sources, indexed by i = 0, ..., n−1. (n is an implementation-dependent maximum that must be at least 8.) Note that the default values for d_cli and s_cli differ for i = 0 and i > 0.

Before specifying the way that lighting computes colors, we introduce operators and notation that simplify the expressions involved. If c_1 and c_2 are colors without alpha where c_1 = (r_1, g_1, b_1) and c_2 = (r_2, g_2, b_2), then define c_1 * c_2 = (r_1 r_2, g_1 g_2, b_1 b_2). Addition of colors is accomplished by addition of

Material Parameters

a_cm   color   (0.2, 0.2, 0.2, 1.0)   ambient color of material
d_cm   color   (0.8, 0.8, 0.8, 1.0)   diffuse color of material
s_cm   color   (0.0, 0.0, 0.0, 1.0)   specular color of material
e_cm   color   (0.0, 0.0, 0.0, 1.0)   emissive color of material
s_rm   real    0.0                    specular exponent (range: [0.0, 128.0])
a_m    real    0.0                    ambient color index
d_m    real    1.0                    diffuse color index
s_m    real    1.0                    specular color index

Light Source Parameters

a_cli          color      (0.0, 0.0, 0.0, 1.0)   ambient intensity of light i
d_cli (i = 0)  color      (1.0, 1.0, 1.0, 1.0)   diffuse intensity of light 0
d_cli (i > 0)  color      (0.0, 0.0, 0.0, 1.0)   diffuse intensity of light i
s_cli (i = 0)  color      (1.0, 1.0, 1.0, 1.0)   specular intensity of light 0
s_cli (i > 0)  color      (0.0, 0.0, 0.0, 1.0)   specular intensity of light i
P_pli          position   (0.0, 0.0, 1.0, 0.0)   position of light i
s_dli          direction  (0.0, 0.0, −1.0)       direction of spotlight for light i
s_rli          real       0.0                    spotlight exponent for light i (range: [0.0, 128.0])
c_rli          real       180.0                  spotlight cutoff angle for light i (range: [0.0, 90.0], 180.0)
k_0i           real       1.0                    constant attenuation factor for light i (range: [0.0, ∞))
k_1i           real       0.0                    linear attenuation factor for light i (range: [0.0, ∞))
k_2i           real       0.0                    quadratic attenuation factor for light i (range: [0.0, ∞))

Lighting Model Parameters

a_cs   color     (0.2, 0.2, 0.2, 1.0)   ambient color of scene
v_bs   boolean   FALSE                  viewer assumed to be at (0, 0, 0) in eye coordinates (TRUE) or (0, 0, ∞) (FALSE)
c_es   enum      SINGLE_COLOR           controls computation of colors
t_bs   boolean   FALSE                  use two-sided lighting mode

Table 2.11: Summary of lighting parameters. The range of individual color components is (−∞, +∞).

the components. Multiplication of colors by a scalar means multiplying each component by that scalar. If d_1 and d_2 are directions, then define

\[
d_1 \odot d_2 = \max\{ d_1 \cdot d_2, 0 \}.
\]

(Directions are taken to have three coordinates.) If P_1 and P_2 are (homogeneous, with four coordinates) points, then let \( \overrightarrow{P_1 P_2} \) be the unit vector that points from P_1 to P_2. Note that if P_2 has a zero w coordinate and P_1 has a non-zero w coordinate, then \( \overrightarrow{P_1 P_2} \) is the unit vector corresponding to the direction specified by the x, y, and z coordinates of P_2; if P_1 has a zero w coordinate and P_2 has a non-zero w coordinate, then \( \overrightarrow{P_1 P_2} \) is the unit vector that is the negative of that corresponding to the direction specified by P_1. If both P_1 and P_2 have zero w coordinates, then \( \overrightarrow{P_1 P_2} \) is the unit vector obtained by normalizing the direction corresponding to P_2 − P_1.

If d is an arbitrary direction, then let \( \hat{d} \) be the unit vector in d's direction. Let \( \|P_1 P_2\| \) be the distance between P_1 and P_2. Finally, let V be the point corresponding to the vertex being lit, and n be the corresponding normal. Let P_e be the eyepoint ((0, 0, 0, 1) in eye coordinates).

Lighting produces two colors at a vertex: a primary color c_pri and a secondary color c_sec. The values of c_pri and c_sec depend on the light model color control, c_es. If c_es = SINGLE_COLOR, then the equations to compute c_pri and c_sec are

\[
\begin{aligned}
c_{pri} ={} & e_{cm} \\
& + a_{cm} * a_{cs} \\
& + \sum_{i=0}^{n-1} (att_i)(spot_i) \Big[ a_{cm} * a_{cli} \\
& \qquad + (n \odot \overrightarrow{VP_{pli}})\, d_{cm} * d_{cli} \\
& \qquad + (f_i)(n \odot \hat{h}_i)^{s_{rm}}\, s_{cm} * s_{cli} \Big] \\
c_{sec} ={} & (0, 0, 0, 1)
\end{aligned}
\]

If c_es = SEPARATE_SPECULAR_COLOR, then

\[
\begin{aligned}
c_{pri} ={} & e_{cm} \\
& + a_{cm} * a_{cs} \\
& + \sum_{i=0}^{n-1} (att_i)(spot_i) \Big[ a_{cm} * a_{cli} + (n \odot \overrightarrow{VP_{pli}})\, d_{cm} * d_{cli} \Big] \\
c_{sec} ={} & \sum_{i=0}^{n-1} (att_i)(spot_i)(f_i)(n \odot \hat{h}_i)^{s_{rm}}\, s_{cm} * s_{cli}
\end{aligned}
\]


where

\[
f_i = \begin{cases} 1, & n \odot \overrightarrow{VP_{pli}} \ne 0, \\ 0, & \text{otherwise}, \end{cases} \tag{2.2}
\]

\[
h_i = \begin{cases} \overrightarrow{VP_{pli}} + \overrightarrow{VP_e}, & v_{bs} = \text{TRUE}, \\ \overrightarrow{VP_{pli}} + \begin{pmatrix} 0 & 0 & 1 \end{pmatrix}^T, & v_{bs} = \text{FALSE}, \end{cases} \tag{2.3}
\]

\[
att_i = \begin{cases} \dfrac{1}{k_{0i} + k_{1i}\,\|VP_{pli}\| + k_{2i}\,\|VP_{pli}\|^2}, & \text{if } P_{pli}\text{'s } w \ne 0, \\ 1.0, & \text{otherwise}. \end{cases} \tag{2.4}
\]

\[
spot_i = \begin{cases} (\overrightarrow{P_{pli}V} \odot \hat{s}_{dli})^{s_{rli}}, & c_{rli} \ne 180.0,\ \overrightarrow{P_{pli}V} \odot \hat{s}_{dli} \ge \cos(c_{rli}), \\ 0.0, & c_{rli} \ne 180.0,\ \overrightarrow{P_{pli}V} \odot \hat{s}_{dli} < \cos(c_{rli}), \\ 1.0, & c_{rli} = 180.0. \end{cases} \tag{2.5}
\]

All computations are carried out in eye coordinates.

The value of A produced by lighting is the alpha value associated with d_cm. A is always associated with the primary color c_pri; the alpha component of c_sec is always 1.

Results of lighting are undefined if the w_e coordinate (w in eye coordinates) of V is zero.

Lighting may operate in two-sided mode (t_bs = TRUE), in which a front color is computed with one set of material parameters (the front material) and a back color is computed with a second set of material parameters (the back material). This second computation replaces n with −n. If t_bs = FALSE, then the back color and front color are both assigned the color computed using the front material with n.

Additionally, vertex shaders can operate in two-sided color mode. When a vertex shader is active, front and back colors can be computed by the vertex shader and written to the gl_FrontColor, gl_BackColor, gl_FrontSecondaryColor and gl_BackSecondaryColor outputs. If VERTEX_PROGRAM_TWO_SIDE is enabled, the GL chooses between front and back colors, as described below. Otherwise, the front color output is always selected. Two-sided color mode is enabled and disabled by calling Enable or Disable with the symbolic value VERTEX_PROGRAM_TWO_SIDE.

The selection between back and front colors depends on the primitive of which the vertex being lit is a part. If the primitive is a point or a line segment, the front color is always selected. If it is a polygon, then the selection is based on the sign of the (clipped or unclipped) polygon’s signed area computed in window coordinates. One way to compute this area is

\[
a = \frac{1}{2} \sum_{i=0}^{n-1} x_w^i y_w^{i \oplus 1} - x_w^{i \oplus 1} y_w^i \tag{2.6}
\]

where x_w^i and y_w^i are the x and y window coordinates of the ith vertex of the n-vertex polygon (vertices are numbered starting at zero for purposes of this computation) and i ⊕ 1 is (i + 1) mod n. The interpretation of the sign of this value is controlled with

void FrontFace( enum dir );

Setting dir to CCW (corresponding to counter-clockwise orientation of the projected polygon in window coordinates) indicates that if a ≤ 0, then the color of each vertex of the polygon becomes the back color computed for that vertex while if a > 0, then the front color is selected. If dir is CW, then a is replaced by −a in the above inequalities. This requires one bit of state; initially, it indicates CCW.

2.19.2 Lighting Parameter Specification

Lighting parameters are divided into three categories: material parameters, light source parameters, and lighting model parameters (see table 2.11). Sets of lighting parameters are specified with

void Material{if}( enum face, enum pname, T param );

void Material{if}v( enum face, enum pname, T params );

void Light{if}( enum light, enum pname, T param );

void Light{if}v( enum light, enum pname, T params );

void LightModel{if}( enum pname, T param );

void LightModel{if}v( enum pname, T params );

pname is a symbolic constant indicating which parameter is to be set (see table 2.12). In the vector versions of the commands, params is a pointer to a group of values to which to set the indicated parameter. The number of values pointed to depends on the parameter being set. In the non-vector versions, param is a value to which to set a single-valued parameter. (If param corresponds to a multi-valued parameter, the error INVALID_ENUM results.) For the Material command, face must be one of FRONT, BACK, or FRONT_AND_BACK, indicating that the property name of the front or back material, or both, respectively, should be set. In the case of Light, light is a symbolic constant of the form LIGHTi, indicating that light i is to have the specified parameter set. The constants obey LIGHTi = LIGHT0 + i.

Table 2.12 gives, for each of the three parameter groups, the correspondence between the pre-defined constant names and their names in the lighting equations, along with the number of values that must be specified with each. Color parameters specified with Material and Light are converted to floating-point values (if specified as integers) as indicated in table 2.10 for signed integers. The error INVALID_VALUE occurs if a specified lighting parameter lies outside the allowable range given in table 2.11. (The symbol "∞" indicates the maximum representable magnitude for the indicated type.)

Material properties can be changed inside a Begin/End pair by calling Material. However, when a vertex shader is active such property changes are not guaranteed to update material parameters, defined in table 2.12, until the following End command.

The current model-view matrix is applied to the position parameter indicated with Light for a particular light source when that position is specified. These transformed values are the values used in the lighting equation.

The spotlight direction is transformed when it is specified using only the upper leftmost 3 × 3 portion of the model-view matrix. That is, if M_u is the upper left 3 × 3 matrix taken from the current model-view matrix M, then the spotlight direction

\[
\begin{pmatrix} d_x \\ d_y \\ d_z \end{pmatrix}
\]

is transformed to

\[
\begin{pmatrix} d_x' \\ d_y' \\ d_z' \end{pmatrix} = M_u \begin{pmatrix} d_x \\ d_y \\ d_z \end{pmatrix}.
\]

An individual light is enabled or disabled by calling Enable or Disable with the symbolic value LIGHTi (i is in the range 0 to n−1, where n is the implementation-dependent number of lights). If light i is disabled, the ith term in the lighting equation is effectively removed from the summation.


Material Parameters (Material)

a_cm            AMBIENT                 4
d_cm            DIFFUSE                 4
a_cm, d_cm      AMBIENT_AND_DIFFUSE     4
s_cm            SPECULAR                4
e_cm            EMISSION                4
s_rm            SHININESS               1
a_m, d_m, s_m   COLOR_INDEXES           3

Light Source Parameters (Light)

a_cli   AMBIENT                 4
d_cli   DIFFUSE                 4
s_cli   SPECULAR                4
P_pli   POSITION                4
s_dli   SPOT_DIRECTION          3
s_rli   SPOT_EXPONENT           1
c_rli   SPOT_CUTOFF             1
k_0     CONSTANT_ATTENUATION    1
k_1     LINEAR_ATTENUATION      1
k_2     QUADRATIC_ATTENUATION   1

Lighting Model Parameters (LightModel)

a_cs   LIGHT_MODEL_AMBIENT         4
v_bs   LIGHT_MODEL_LOCAL_VIEWER    1
t_bs   LIGHT_MODEL_TWO_SIDE        1
c_es   LIGHT_MODEL_COLOR_CONTROL   1

Table 2.12: Correspondence of lighting parameter symbols to names. AMBIENT_AND_DIFFUSE is used to set a_cm and d_cm to the same value.


2.19.3 ColorMaterial

It is possible to attach one or more material properties to the current color, so that they continuously track its component values. This behavior is enabled and disabled by calling Enable or Disable with the symbolic value COLOR_MATERIAL.

The command that controls which of these modes is selected is

void ColorMaterial( enum face, enum mode );

face is one of FRONT, BACK, or FRONT_AND_BACK, indicating whether the front material, back material, or both are affected by the current color. mode is one of EMISSION, AMBIENT, DIFFUSE, SPECULAR, or AMBIENT_AND_DIFFUSE and specifies which material property or properties track the current color. If mode is EMISSION, AMBIENT, DIFFUSE, or SPECULAR, then the value of e_cm, a_cm, d_cm or s_cm, respectively, will track the current color. If mode is AMBIENT_AND_DIFFUSE, both a_cm and d_cm track the current color. The replacements made to material properties are permanent; the replaced values remain until changed by either sending a new color or by setting a new material value when ColorMaterial is not currently enabled to override that particular value. When COLOR_MATERIAL is enabled, the indicated parameter or parameters always track the current color. For instance, calling

ColorMaterial(FRONT, AMBIENT)

while COLOR_MATERIAL is enabled sets the front material a_cm to the value of the current color.

Material properties can be changed inside a Begin/End pair indirectly by enabling ColorMaterial mode and making Color calls. However, when a vertex shader is active such property changes are not guaranteed to update material parameters, defined in table 2.12, until the following End command.

2.19.4 Lighting State

The state required for lighting consists of all of the lighting parameters (front and back material parameters, lighting model parameters, and at least 8 sets of light parameters), a bit indicating whether a back color distinct from the front color should be computed, at least 8 bits to indicate which lights are enabled, a five-valued variable indicating the current ColorMaterial mode, a bit indicating whether or not COLOR_MATERIAL is enabled, and a single bit to indicate whether lighting is enabled or disabled. In the initial state, all lighting parameters have their default values. Back color evaluation does not take place, ColorMaterial is FRONT_AND_BACK and AMBIENT_AND_DIFFUSE, and both lighting and COLOR_MATERIAL are disabled.


2.19.5 Color Index Lighting

A simplified lighting computation applies in color index mode that uses many of the parameters controlling RGBA lighting, but none of the RGBA material parameters. First, the RGBA diffuse and specular intensities of light i (d_cli and s_cli, respectively) determine color index diffuse and specular light intensities, d_li and s_li, from

\[
d_{li} = (.30) R(d_{cli}) + (.59) G(d_{cli}) + (.11) B(d_{cli})
\]

and

\[
s_{li} = (.30) R(s_{cli}) + (.59) G(s_{cli}) + (.11) B(s_{cli}).
\]

R(x) indicates the R component of the color x and similarly for G(x) and B(x). Next, let

\[
s = \sum_{i=0}^{n-1} (att_i)(spot_i)(s_{li})(f_i)(n \odot \hat{h}_i)^{s_{rm}}
\]

where att_i and spot_i are given by equations 2.4 and 2.5, respectively, and f_i and ĥ_i are given by equations 2.2 and 2.3, respectively. Let s′ = min{s, 1}. Finally, let

\[
d = \sum_{i=0}^{n-1} (att_i)(spot_i)(d_{li})(n \odot \overrightarrow{VP_{pli}}).
\]

Then color index lighting produces a value c, given by

\[
c = a_m + d(1 - s')(d_m - a_m) + s'(s_m - a_m).
\]

The final color index is

\[
c' = \min\{c, s_m\}.
\]

The values a_m, d_m and s_m are material properties described in tables 2.11 and 2.12. Any ambient light intensities are incorporated into a_m. As with RGBA lighting, disabled lights cause the corresponding terms from the summations to be omitted. The interpretation of t_bs and the calculation of front and back colors is carried out as has already been described for RGBA lighting.

The values a_m, d_m, and s_m are set with Material using a pname of COLOR_INDEXES. Their initial values are 0, 1, and 1, respectively. The additional state consists of three floating-point values. These values have no effect on RGBA lighting.

Version 3.0 -August 11, 2008


2.19.6 Clamping or Masking

When the GL is in RGBA mode and vertex color clamping is enabled, all components of both primary and secondary colors are clamped to the range [0, 1] after lighting. If color clamping is disabled, the primary and secondary colors are unmodified. Vertex color clamping is controlled by calling

void ClampColor( enum target, enum clamp );

with target set to CLAMP_VERTEX_COLOR. If clamp is TRUE, vertex color clamping is enabled; if clamp is FALSE, vertex color clamping is disabled. If clamp is FIXED_ONLY, vertex color clamping is enabled if all enabled color buffers have fixed-point components.

For a color index, the index is first converted to fixed-point with an unspecified number of bits to the right of the binary point; the nearest fixed-point value is selected. Then, the bits to the right of the binary point are left alone while the integer portion is masked (bitwise ANDed) with 2^n − 1, where n is the number of bits in a color in the color index buffer (buffers are discussed in chapter 4).

The state required for color clamping is a three-valued integer, initially set to TRUE.

2.19.7 Flatshading

A primitive may be flatshaded, meaning that all vertices of the primitive are assigned the same color index or the same primary and secondary colors. These colors are the colors of the vertex that spawned the primitive. For a point, these are the colors associated with the point. For a line segment, they are the colors of the second (final) vertex of the segment. For a polygon, they come from a selected vertex depending on how the polygon was generated. Table 2.13 summarizes the possibilities.

Flatshading is controlled by

void ShadeModel( enum mode );

mode must be one of the symbolic constants SMOOTH or FLAT. If mode is SMOOTH (the initial state), vertex colors are treated individually. If mode is FLAT, flatshading is turned on. ShadeModel thus requires one bit of state.

If a vertex shader is active, the flat shading control applies to the built-in varying variables gl_FrontColor, gl_BackColor, gl_FrontSecondaryColor and gl_BackSecondaryColor. Non-color varying variables can be specified as being flat-shaded via the flat qualifier, as described in section 4.3.6 of the OpenGL Shading Language Specification.


Primitive type of polygon i   Vertex
single polygon (i ≡ 1)        1
triangle strip                i + 2
triangle fan                  i + 2
independent triangle          3i
quad strip                    2i + 2
independent quad              4i

Table 2.13: Polygon flatshading color selection. The colors used for flatshading the ith polygon generated by the indicated Begin/End type are derived from the current color (if lighting is disabled) in effect when the indicated vertex is specified. If lighting is enabled, the colors are produced by lighting the indicated vertex. Vertices are numbered 1 through n, where n is the number of vertices between the Begin/End pair.

2.19.8 Color and Associated Data Clipping

After lighting, clamping or masking and possible flatshading, colors are clipped. Those colors associated with a vertex that lies within the clip volume are unaffected by clipping. If a primitive is clipped, however, the colors assigned to vertices produced by clipping are clipped colors.

Let the colors assigned to the two vertices P_1 and P_2 of an unclipped edge be c_1 and c_2. The value of t (section 2.17) for a clipped point P is used to obtain the color associated with P as

\[
c = t c_1 + (1 - t) c_2.
\]

(For a color index color, multiplying a color by a scalar means multiplying the index by the scalar. For an RGBA color, it means multiplying each of R, G, B, and A by the scalar. Both primary and secondary colors are treated in the same fashion.) Polygon clipping may create a clipped vertex along an edge of the clip volume’s boundary. This situation is handled by noting that polygon clipping proceeds by clipping against one plane of the clip volume’s boundary at a time. Color clipping is done in the same way, so that clipped points always occur at the intersection of polygon edges (possibly already clipped) with the clip volume’s boundary.

Texture and fog coordinates, vertex shader varying variables (section 2.20.3), and point sizes computed on a per-vertex basis must also be clipped when a primitive is clipped. The method is exactly analogous to that used for color clipping.

For vertex shader varying variables specified to be interpolated without perspective correction (using the noperspective qualifier), the value of t used to obtain the varying value associated with P will be adjusted to produce results that vary linearly in screen space.

2.19.9 Final Color Processing

In RGBA mode with vertex color clamping disabled, the floating-point RGBA components are not modified.

In RGBA mode with vertex color clamping enabled, each color component (already clamped to [0, 1]) may be converted (by rounding to nearest) to a fixed-point value with m bits. We assume that the fixed-point representation used represents each value k/(2^m − 1), where k ∈ {0, 1, ..., 2^m − 1}, as k (e.g. 1.0 is represented in binary as a string of all ones). m must be at least as large as the number of bits in the corresponding component of the framebuffer. m must be at least 2 for A if the framebuffer does not contain an A component, or if there is only 1 bit of A in the framebuffer. GL implementations are not required to convert clamped color components to fixed-point.

Because a number of the form k/(2^m − 1) may not be represented exactly as a limited-precision floating-point quantity, we place a further requirement on the fixed-point conversion of RGBA components. Suppose that lighting is disabled, the color associated with a vertex has not been clipped, and one of Colorub, Colorus, or Colorui was used to specify that color. When these conditions are satisfied, an RGBA component must convert to a value that matches the component as specified in the Color command: if m is less than the number of bits b with which the component was specified, then the converted value must equal the most significant m bits of the specified value; otherwise, the most significant b bits of the converted value must equal the specified value.

A color index is converted (by rounding to nearest) to a fixed-point value with at least as many bits as there are in the color index portion of the framebuffer.

2.20 Vertex Shaders

The sequence of operations described in sections 2.12 through 2.19 is a fixed-function method for processing vertex data. Applications can more generally describe the operations that occur on vertex values and their associated data by using a vertex shader.

A vertex shader is an array of strings containing source code for the operations that are meant to occur on each vertex that is processed. The language used for vertex shaders is described in the OpenGL Shading Language Specification.


To use a vertex shader, shader source code is first loaded into a shader object and then compiled. One or more vertex shader objects are then attached to a program object. A program object is then linked, which generates executable code from all the compiled shader objects attached to the program. When a linked program object is used as the current program object, the executable code for the vertex shaders it contains is used to process vertices.

In addition to vertex shaders, fragment shaders can be created, compiled, and linked into program objects. Fragment shaders affect the processing of fragments during rasterization, and are described in section 3.12. A single program object can contain both vertex and fragment shaders.

When the program object currently in use includes a vertex shader, its vertex shader is considered active and is used to process vertices. If the program object has no vertex shader, or no program object is currently in use, the fixed-function method for processing vertices is used instead.

2.20.1 Shader Objects

The source code that makes up a program that gets executed by one of the programmable stages is encapsulated in one or more shader objects.

The name space for shader objects is the unsigned integers, with zero reserved for the GL. This name space is shared with program objects. The following sections define commands that operate on shader and program objects by name. Commands that accept shader or program object names will generate the error INVALID_VALUE if the provided name is not the name of either a shader or program object, and INVALID_OPERATION if the provided name identifies an object that is not the expected type.

To create a shader object, use the command

uint CreateShader( enum type );

The shader object is empty when it is created. The type argument specifies the type of shader object to be created. For vertex shaders, type must be VERTEX_SHADER. A non-zero name that can be used to reference the shader object is returned. If an error occurs, zero will be returned.

The command

void ShaderSource( uint shader, sizei count, const char **string, const int *length );

loads source code into the shader object named shader. string is an array of count pointers to optionally null-terminated character strings that make up the source code. The length argument is an array with the number of chars in each string (the string length). If an element in length is negative, its accompanying string is null-terminated. If length is NULL, all strings in the string argument are considered null-terminated. The ShaderSource command sets the source code for the shader to the text strings in the string array. If shader previously had source code loaded into it, the existing source code is completely replaced. Any length passed in excludes the null terminator in its count.
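The length rules can be modeled in a few lines of plain C. This is an illustrative helper with a hypothetical name, not a GL entry point:

```c
#include <stddef.h>
#include <string.h>

/* Effective length of string i under the ShaderSource rules:
 * a NULL length array or a negative entry means the string is
 * null-terminated; otherwise the entry gives the char count. */
static size_t effective_length(const char **string,
                               const int *length, size_t i) {
    if (length == NULL || length[i] < 0)
        return strlen(string[i]);
    return (size_t)length[i];
}
```

For example, with strings { "void main()", "{ }" } and lengths { 4, -1 }, the first string contributes only its first 4 characters, while the second is read up to its null terminator.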

The strings that are loaded into a shader object are expected to form the source code for a valid shader as defined in the OpenGL Shading Language Specification.

Once the source code for a shader has been loaded, a shader object can be compiled with the command

void CompileShader( uint shader );

Each shader object has a boolean status, COMPILE_STATUS, that is modified as a result of compilation. This status can be queried with GetShaderiv (see section 6.1.15). This status will be set to TRUE if shader was compiled without errors and is ready for use, and FALSE otherwise. Compilation can fail for a variety of reasons as listed in the OpenGL Shading Language Specification. If CompileShader failed, any information about a previous compile is lost. Thus a failed compile does not restore the old state of shader.

Changing the source code of a shader object with ShaderSource does not change its compile status or the compiled shader code.

Each shader object has an information log, which is a text string that is overwritten as a result of compilation. This information log can be queried with GetShaderInfoLog to obtain more information about the compilation attempt (see section 6.1.15).

Shader objects can be deleted with the command

void DeleteShader( uint shader );

If shader is not attached to any program object, it is deleted immediately. Otherwise, shader is flagged for deletion and will be deleted when it is no longer attached to any program object. If an object is flagged for deletion, its boolean status bit DELETE_STATUS is set to true. The value of DELETE_STATUS can be queried with GetShaderiv (see section 6.1.15). DeleteShader will silently ignore the value zero.

2.20.2 Program Objects

The shader objects that are to be used by the programmable stages of the GL are collected together to form a program object. The programs that are executed by these programmable stages are called executables. All information necessary for defining an executable is encapsulated in a program object. A program object is created with the command

uint CreateProgram( void );

Program objects are empty when they are created. A non-zero name that can be used to reference the program object is returned. If an error occurs, 0 will be returned.

To attach a shader object to a program object, use the command

void AttachShader( uint program, uint shader );

The error INVALID_OPERATION is generated if shader is already attached to program.

Shader objects may be attached to program objects before source code has been loaded into the shader object, or before the shader object has been compiled. Multiple shader objects of the same type may be attached to a single program object, and a single shader object may be attached to more than one program object.

To detach a shader object from a program object, use the command

void DetachShader( uint program, uint shader );

The error INVALID_OPERATION is generated if shader is not attached to program. If shader has been flagged for deletion and is not attached to any other program object, it is deleted.

In order to use the shader objects contained in a program object, the program object must be linked. The command

void LinkProgram( uint program );

will link the program object named program. Each program object has a boolean status, LINK_STATUS, that is modified as a result of linking. This status can be queried with GetProgramiv (see section 6.1.15). This status will be set to TRUE if a valid executable is created, and FALSE otherwise. Linking can fail for a variety of reasons as specified in the OpenGL Shading Language Specification. Linking will also fail if one or more of the shader objects attached to program are not compiled successfully, or if more active uniform or active sampler variables are used in program than allowed (see section 2.20.3). If LinkProgram failed, any information about a previous link of that program object is lost. Thus, a failed link does not restore the old state of program.


Each program object has an information log that is overwritten as a result of a link operation. This information log can be queried with GetProgramInfoLog to obtain more information about the link operation or the validation information (see section 6.1.15).

If a valid executable is created, it can be made part of the current rendering state with the command

void UseProgram( uint program );

This command will install the executable code as part of current rendering state if the program object program contains valid executable code, i.e. has been linked successfully. If UseProgram is called with program set to 0, it is as if the GL had no programmable stages and the fixed-function paths will be used instead. If program has not been successfully linked, the error INVALID_OPERATION is generated and the current rendering state is not modified.

While a program object is in use, applications are free to modify attached shader objects, compile attached shader objects, attach additional shader objects, and detach shader objects. These operations do not affect the link status or executable code of the program object.

If the program object that is in use is re-linked successfully, the LinkProgram command will install the generated executable code as part of the current rendering state if the specified program object was already in use as a result of a previous call to UseProgram.

If the program object that is in use is re-linked unsuccessfully, the link status will be set to FALSE, but the existing executable and associated state will remain part of the current rendering state until a subsequent call to UseProgram removes it from use. After such a program is removed from use, it cannot be made part of the current rendering state until it is successfully re-linked.

Program objects can be deleted with the command

void DeleteProgram( uint program );

If program is not the current program for any GL context, it is deleted immediately. Otherwise, program is flagged for deletion and will be deleted when it is no longer the current program for any context. When a program object is deleted, all shader objects attached to it are detached. DeleteProgram will silently ignore the value zero.
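The object lifecycle described in sections 2.20.1 and 2.20.2 follows a fixed call sequence: create, load source, compile, attach, link, use. The C fragment below is illustrative only; it uses the spec's prefix-less entry-point names via hypothetical stand-in stubs that model only the call order and resulting status bits, so the sequence can be followed without a real GL implementation:

```c
#include <stdbool.h>
#include <stddef.h>

typedef unsigned int uint;

/* Hypothetical stand-ins for the entry points above; they track only
 * the status bits, not real compilation or linking. */
static bool shader_compiled[16], program_linked[16];

static uint CreateShader(unsigned type) { (void)type; return 1; } /* empty shader object */
static uint CreateProgram(void) { return 2; }                     /* empty program object */
static void ShaderSource(uint s, int count, const char **string,
                         const int *length) {                     /* NULL length: null-terminated */
    (void)s; (void)count; (void)string; (void)length;
}
static void CompileShader(uint s) { shader_compiled[s] = true; }  /* sets COMPILE_STATUS */
static void AttachShader(uint p, uint s) { (void)p; (void)s; }
static void LinkProgram(uint p) { program_linked[p] = true; }     /* sets LINK_STATUS */
static void UseProgram(uint p) { (void)p; }                       /* installs the executable */

#define VERTEX_SHADER 0x8B31

/* The canonical build-and-use sequence for a single vertex shader. */
static bool build_and_use(const char *src) {
    uint vs = CreateShader(VERTEX_SHADER);
    ShaderSource(vs, 1, &src, NULL);
    CompileShader(vs);
    uint prog = CreateProgram();
    AttachShader(prog, vs);
    LinkProgram(prog);
    UseProgram(prog);
    return shader_compiled[vs] && program_linked[prog];
}
```

In a real implementation each step's status would be checked via GetShaderiv and GetProgramiv before proceeding.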

2.20.3 Shader Variables

A vertex shader can reference a number of variables as it executes. Vertex attributes are the per-vertex values specified in section 2.7. Uniforms are per-program variables that are constant during program execution. Samplers are a special form of uniform used for texturing (section 3.9). Varying variables hold the results of vertex shader execution that are used later in the pipeline. The following sections describe each of these variable types.

Vertex Attributes

Vertex shaders can access built-in vertex attribute variables corresponding to the per-vertex state set by commands such as Vertex, Normal, and Color. Vertex shaders can also define named attribute variables, which are bound to the generic vertex attributes that are set by VertexAttrib*. This binding can be specified by the application before the program is linked, or automatically assigned by the GL when the program is linked.

When an attribute variable declared as a float, vec2, vec3, or vec4 is bound to a generic attribute index i, its value(s) are taken from the x, (x, y), (x, y, z), or (x, y, z, w) components, respectively, of the generic attribute i. When an attribute variable is declared as a mat2, mat3x2, or mat4x2, its matrix columns are taken from the (x, y) components of generic attributes i and i + 1 (mat2), from attributes i through i + 2 (mat3x2), or from attributes i through i + 3 (mat4x2). When an attribute variable is declared as a mat2x3, mat3, or mat4x3, its matrix columns are taken from the (x, y, z) components of generic attributes i and i + 1 (mat2x3), from attributes i through i + 2 (mat3), or from attributes i through i + 3 (mat4x3). When an attribute variable is declared as a mat2x4, mat3x4, or mat4, its matrix columns are taken from the (x, y, z, w) components of generic attributes i and i + 1 (mat2x4), from attributes i through i + 2 (mat3x4), or from attributes i through i + 3 (mat4).
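The enumeration above follows a single pattern: a matrix with C columns occupies C consecutive generic attributes (one per column), and row r of each column comes from component r of the corresponding attribute. Two illustrative C helpers (hypothetical, not GL entry points) capture this:

```c
/* A matrix attribute with `cols` columns bound at index i occupies
 * generic attributes i .. i + cols - 1, one per column. */
static int attrib_slots(int cols) {
    return cols;
}

/* Element (column c, row r) of such a matrix is sourced from
 * component r (0 = x, 1 = y, 2 = z, 3 = w) of attribute i + c. */
static void matrix_element_source(int i, int c, int r,
                                  int *attrib, int *component) {
    *attrib = i + c;
    *component = r;
}
```

For example, a mat4x3 bound at index i spans attributes i through i + 3, and its column 2, row 1 entry comes from the y component of attribute i + 2.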

An attribute variable (either conventional or generic) is considered active if it is determined by the compiler and linker that the attribute may be accessed when the shader is executed. Attribute variables that are declared in a vertex shader but never used will not count against the limit. In cases where the compiler and linker cannot make a conclusive determination, an attribute will be considered active. A program object will fail to link if the sum of the active generic and active conventional attributes exceeds MAX_VERTEX_ATTRIBS.

To determine the set of active vertex attributes used by a program, and to determine their types, use the command:

void GetActiveAttrib( uint program, uint index, sizei bufSize, sizei *length, int *size, enum *type, char *name );


This command provides information about the attribute selected by index. An index of 0 selects the first active attribute, and an index of ACTIVE_ATTRIBUTES - 1 selects the last active attribute. The value of ACTIVE_ATTRIBUTES can be queried with GetProgramiv (see section 6.1.15). If index is greater than or equal to ACTIVE_ATTRIBUTES, the error INVALID_VALUE is generated. Note that index simply identifies a member in a list of active attributes, and has no relation to the generic attribute that the corresponding variable is bound to.

The parameter program is the name of a program object for which the command LinkProgram has been issued in the past. It is not necessary for program to have been linked successfully. The link could have failed because the number of active attributes exceeded the limit.

The name of the selected attribute is returned as a null-terminated string in name. The actual number of characters written into name, excluding the null terminator, is returned in length. If length is NULL, no length is returned. The maximum number of characters that may be written into name, including the null terminator, is specified by bufSize. The returned attribute name can be the name of a generic attribute or a conventional attribute (which begin with the prefix "gl_"; see the OpenGL Shading Language specification for a complete list). The length of the longest attribute name in program is given by ACTIVE_ATTRIBUTE_MAX_LENGTH, which can be queried with GetProgramiv (see section 6.1.15).

For the selected attribute, the type of the attribute is returned into type. The size of the attribute is returned into size. The value in size is in units of the type returned in type. The type returned can be any of FLOAT, FLOAT_VEC2, FLOAT_VEC3, FLOAT_VEC4, FLOAT_MAT2, FLOAT_MAT3, FLOAT_MAT4, FLOAT_MAT2x3, FLOAT_MAT2x4, FLOAT_MAT3x2, FLOAT_MAT3x4, FLOAT_MAT4x2, FLOAT_MAT4x3, INT, INT_VEC2, INT_VEC3, INT_VEC4, UNSIGNED_INT, UNSIGNED_INT_VEC2, UNSIGNED_INT_VEC3, or UNSIGNED_INT_VEC4.

If an error occurred, the return parameters length, size, type and name will be unmodified.

This command will return as much information about active attributes as possible. If no information is available, length will be set to zero and name will be an empty string. This situation could arise if GetActiveAttrib is issued after a failed link.

After a program object has been linked successfully, the bindings of attribute variable names to indices can be queried. The command

int GetAttribLocation( uint program, const char *name );

returns the generic attribute index that the attribute variable named name was bound to when the program object named program was last linked. name must be a null-terminated string. If name is active and is an attribute matrix, GetAttribLocation returns the index of the first column of that matrix. If program has not been successfully linked, the error INVALID_OPERATION is generated. If name is not an active attribute, if name is a conventional attribute, or if an error occurs, -1 will be returned.

The binding of an attribute variable to a generic attribute index can also be specified explicitly. The command

void BindAttribLocation( uint program, uint index, const char *name );

specifies that the attribute variable named name in program program should be bound to generic vertex attribute index when the program is next linked. If name was bound previously, its assigned binding is replaced with index. name must be a null-terminated string. The error INVALID_VALUE is generated if index is equal to or greater than MAX_VERTEX_ATTRIBS. BindAttribLocation has no effect until the program is linked. In particular, it does not modify the bindings of active attribute variables in a program that has already been linked.

Built-in attribute variables are automatically bound to conventional attributes, and cannot have an assigned binding. The error INVALID_OPERATION is generated if name starts with the reserved "gl_" prefix.

When a program is linked, any active attributes without a binding specified through BindAttribLocation will automatically be bound to vertex attributes by the GL. Such bindings can be queried using the command GetAttribLocation. LinkProgram will fail if the assigned binding of an active attribute variable would cause the GL to reference a non-existent generic attribute (one greater than or equal to MAX_VERTEX_ATTRIBS). LinkProgram will fail if the attribute bindings assigned by BindAttribLocation do not leave enough space to assign a location for an active matrix attribute, which requires multiple contiguous generic attributes. LinkProgram will also fail if the vertex shaders used in the program object contain assignments (not removed during pre-processing) to an attribute variable bound to generic attribute zero and to the conventional vertex position (gl_Vertex).

BindAttribLocation may be issued before any vertex shader objects are attached to a program object. Hence it is allowed to bind any name (except a name starting with "gl_") to an index, including a name that is never used as an attribute in any vertex shader object. Assigned bindings for attribute variables that do not exist or are not active are ignored.

The values of generic attributes sent to generic attribute index i are part of current state, just like the conventional attributes. If a new program object has been made active, then these values will be tracked by the GL in such a way that the same values will be observed by attributes in the new program object that are also bound to index i.

It is possible for an application to bind more than one attribute name to the same location. This is referred to as aliasing. This will only work if only one of the aliased attributes is active in the executable program, or if no path through the shader consumes more than one attribute of a set of attributes aliased to the same location. A link error can occur if the linker determines that every path through the shader consumes multiple aliased attributes, but implementations are not required to generate an error in this case. The compiler and linker are allowed to assume that no aliasing is done, and may employ optimizations that work only in the absence of aliasing. It is not possible to alias generic attributes with conventional ones.

Uniform Variables

Shaders can declare named uniform variables, as described in the OpenGL Shading Language Specification. Values for these uniforms are constant over a primitive, and typically they are constant across many primitives. Uniforms are program object-specific state. They retain their values once loaded, and their values are restored whenever a program object is used, as long as the program object has not been re-linked. A uniform is considered active if it is determined by the compiler and linker that the uniform will actually be accessed when the executable code is executed. In cases where the compiler and linker cannot make a conclusive determination, the uniform will be considered active.

The amount of storage available for uniform variables accessed by a vertex shader is specified by the implementation-dependent constant MAX_VERTEX_UNIFORM_COMPONENTS. This value represents the number of individual floating-point, integer, or boolean values that can be held in uniform variable storage for a vertex shader. A uniform matrix will consume no more than 4 × min(r, c) such values, where r and c are the number of rows and columns in the matrix. A link error will be generated if an attempt is made to utilize more than the space available for vertex shader uniform variables.
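The worst-case component cost of a uniform matrix follows directly from the 4 × min(r, c) bound above. A small illustrative C helper (hypothetical name, not a GL entry point):

```c
/* Upper bound on uniform components consumed by an r-row, c-column
 * matrix, per the 4 * min(r, c) rule. */
static int matrix_uniform_components(int r, int c) {
    int m = (r < c) ? r : c;
    return 4 * m;
}
```

So a mat4 may consume up to 16 components, while a 2-row, 4-column matrix may consume up to 8.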

When a program is successfully linked, all active uniforms belonging to the program object are initialized as defined by the version of the OpenGL Shading Language used to compile the program. A successful link will also generate a location for each active uniform. The values of active uniforms can be changed using this location and the appropriate Uniform* command (see below). These locations are invalidated and new ones assigned after each successful re-link.

To find the location of an active uniform variable within a program object, use the command


int GetUniformLocation( uint program, const char *name );

This command will return the location of uniform variable name. name must be a null-terminated string, without white space. The value -1 will be returned if name does not correspond to an active uniform variable name in program or if name starts with the reserved prefix "gl_". If program has not been successfully linked, the error INVALID_OPERATION is generated. After a program is linked, the location of a uniform variable will not change, unless the program is re-linked.

A valid name cannot be a structure, an array of structures, or any portion of a single vector or a matrix. In order to identify a valid name, the "." (dot) and "[]" operators can be used in name to specify a member of a structure or element of an array.

The first element of a uniform array is identified using the name of the uniform array appended with "[0]". If the last part of the string name indicates a uniform array, then the location of the first element of that array can be retrieved either by using the name of the uniform array, or by using the name of the uniform array appended with "[0]".
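These naming rules can be illustrated by building query strings in C. The uniform names used here (an array "lights" of structs with a "position" member, and an array "weights") are hypothetical examples only:

```c
#include <stdio.h>

/* Build the queryable name of element i of uniform array `base`,
 * e.g. ("lights", 3) -> "lights[3]". Illustrative helper only. */
static void element_name(char *out, size_t n, const char *base, int i) {
    snprintf(out, n, "%s[%d]", base, i);
}
```

Under the rules above, either "weights" or "weights[0]" locates the first element of the array, and a string such as "lights[3].position" (combining the "[]" and "." operators) names one queryable uniform.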

To determine the set of active uniform variables used by a program, and to determine their sizes and types, use the command:

void GetActiveUniform( uint program, uint index, sizei bufSize, sizei *length, int *size, enum *type, char *name );

This command provides information about the uniform selected by index. An index of 0 selects the first active uniform, and an index of ACTIVE_UNIFORMS - 1 selects the last active uniform. The value of ACTIVE_UNIFORMS can be queried with GetProgramiv (see section 6.1.15). If index is greater than or equal to ACTIVE_UNIFORMS, the error INVALID_VALUE is generated. Note that index simply identifies a member in a list of active uniforms, and has no relation to the location assigned to the corresponding uniform variable.

The parameter program is a name of a program object for which the command LinkProgram has been issued in the past. It is not necessary for program to have been linked successfully. The link could have failed because the number of active uniforms exceeded the limit.

If an error occurred, the return parameters length, size, type and name will be unmodified.

For the selected uniform, the uniform name is returned into name. The string name will be null-terminated. The actual number of characters written into name, excluding the null terminator, is returned in length. If length is NULL, no length is returned. The maximum number of characters that may be written into name, including the null terminator, is specified by bufSize. The returned uniform name can be the name of built-in uniform state as well. The complete list of built-in uniform state is described in section 7.5 of the OpenGL Shading Language specification. The length of the longest uniform name in program is given by ACTIVE_UNIFORM_MAX_LENGTH, which can be queried with GetProgramiv (see section 6.1.15).

Each uniform variable, declared in a shader, is broken down into one or more strings using the "." (dot) and "[]" operators, if necessary, to the point that it is legal to pass each string back into GetUniformLocation. Each of these strings constitutes one active uniform, and each string is assigned an index.

For the selected uniform, the type of the uniform is returned into type. The size of the uniform is returned into size. The value in size is in units of the type returned in type. The type returned can be any of FLOAT, FLOAT_VEC2, FLOAT_VEC3, FLOAT_VEC4, INT, INT_VEC2, INT_VEC3, INT_VEC4, BOOL, BOOL_VEC2, BOOL_VEC3, BOOL_VEC4, FLOAT_MAT2, FLOAT_MAT3, FLOAT_MAT4, FLOAT_MAT2x3, FLOAT_MAT2x4, FLOAT_MAT3x2, FLOAT_MAT3x4, FLOAT_MAT4x2, FLOAT_MAT4x3, SAMPLER_1D, SAMPLER_2D, SAMPLER_3D, SAMPLER_CUBE, SAMPLER_1D_SHADOW, SAMPLER_2D_SHADOW, SAMPLER_1D_ARRAY, SAMPLER_2D_ARRAY, SAMPLER_1D_ARRAY_SHADOW, SAMPLER_2D_ARRAY_SHADOW, SAMPLER_CUBE_SHADOW, INT_SAMPLER_1D, INT_SAMPLER_2D, INT_SAMPLER_3D, INT_SAMPLER_CUBE, INT_SAMPLER_1D_ARRAY, INT_SAMPLER_2D_ARRAY, UNSIGNED_INT, UNSIGNED_INT_VEC2, UNSIGNED_INT_VEC3, UNSIGNED_INT_VEC4, UNSIGNED_INT_SAMPLER_1D, UNSIGNED_INT_SAMPLER_2D, UNSIGNED_INT_SAMPLER_3D, UNSIGNED_INT_SAMPLER_CUBE, UNSIGNED_INT_SAMPLER_1D_ARRAY, or UNSIGNED_INT_SAMPLER_2D_ARRAY.

If one or more elements of an array are active, GetActiveUniform will return the name of the array in name, subject to the restrictions listed above. The type of the array is returned in type. The size parameter contains the highest array element index used, plus one. The compiler or linker determines the highest index used. There will be only one active uniform reported by the GL per uniform array.

GetActiveUniform will return as much information about active uniforms as possible. If no information is available, length will be set to zero and name will be an empty string. This situation could arise if GetActiveUniform is issued after a failed link.

To load values into the uniform variables of the program object that is currently in use, use the commands


void Uniform{1234}{if}( int location, T value );
void Uniform{1234}{if}v( int location, sizei count, T value );
void Uniform{1234}ui( int location, T value );
void Uniform{1234}uiv( int location, sizei count, T value );
void UniformMatrix{234}fv( int location, sizei count, boolean transpose, const float *value );
void UniformMatrix{2x3,3x2,2x4,4x2,3x4,4x3}fv( int location, sizei count, boolean transpose, const float *value );

The given values are loaded into the uniform variable location identified by location.

The Uniform*f{v} commands will load count sets of one to four floating-point values into a uniform location defined as a float, a floating-point vector, an array of floats, or an array of floating-point vectors.

The Uniform*i{v} commands will load count sets of one to four integer values into a uniform location defined as a sampler, an integer, an integer vector, an array of samplers, an array of integers, or an array of integer vectors. Only the Uniform1i{v} commands can be used to load sampler values (see below).

The Uniform*ui{v} commands will load count sets of one to four unsigned integer values into a uniform location defined as an unsigned integer, an unsigned integer vector, an array of unsigned integers, or an array of unsigned integer vectors.

The UniformMatrix{234}fv commands will load count 2 × 2, 3 × 3, or 4 × 4 matrices (corresponding to 2, 3, or 4 in the command name) of floating-point values into a uniform location defined as a matrix or an array of matrices. If transpose is FALSE, the matrix is specified in column major order, otherwise in row major order.

The UniformMatrix{2x3,3x2,2x4,4x2,3x4,4x3}fv commands will load count 2 × 3, 3 × 2, 2 × 4, 4 × 2, 3 × 4, or 4 × 3 matrices (corresponding to the numbers in the command name) of floating-point values into a uniform location defined as a matrix or an array of matrices. The first number in the command name is the number of columns; the second is the number of rows. For example, UniformMatrix2x4fv is used to load a matrix consisting of two columns and four rows. If transpose is FALSE, the matrix is specified in column major order, otherwise in row major order.
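The memory layouts implied by transpose can be sketched in plain C. With transpose FALSE (column major), element (row r, column c) of a matrix with R rows is read from value[c * R + r]; with transpose TRUE (row major), from value[r * C + c] for C columns. The helper below is illustrative only:

```c
#include <stdbool.h>

/* Index into the flat `value` array for element (r, c) of an
 * R-row, C-column matrix, per the transpose flag semantics. */
static int mat_index(int r, int c, int R, int C, bool transpose) {
    return transpose ? (r * C + c) : (c * R + r);
}
```

For a matrix loaded with UniformMatrix2x4fv (2 columns, 4 rows), element (row 0, column 1) sits at index 4 in column major order but at index 1 in row major order.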

When loading values for a uniform declared as a boolean, a boolean vector, an array of booleans, or an array of boolean vectors, the Uniform*i{v}, Uniform*ui{v}, and Uniform*f{v} sets of commands can be used to load boolean values. Type conversion is done by the GL. The uniform is set to FALSE if the input value is 0 or 0.0f, and set to TRUE otherwise. The Uniform* command used must match the size of the uniform, as declared in the shader. For example, to load a uniform declared as a bvec2, any of the Uniform2{if ui}* commands may be used. An INVALID_OPERATION error will be generated if an attempt is made to use a non-matching Uniform* command. In this example, using Uniform1iv would generate an error.

For all other uniform types, the Uniform* command used must match the size and type of the uniform, as declared in the shader. No type conversions are done. For example, to load a uniform declared as a vec4, Uniform4f{v} must be used. To load a 3 × 3 matrix, UniformMatrix3fv must be used. An INVALID_OPERATION error will be generated if an attempt is made to use a non-matching Uniform* command. In this example, using Uniform4i{v} would generate an error.

When loading N elements starting at an arbitrary position k in a uniform declared as an array, elements k through k + N - 1 in the array will be replaced with the new values. Values for any array element that exceeds the highest array element index used, as reported by GetActiveUniform, will be ignored by the GL.
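A minimal C model of this partial-update rule (illustrative only; active_size stands for the highest used element index plus one, as GetActiveUniform reports it):

```c
/* Replace elements k .. k+N-1 of a uniform float array, ignoring
 * any writes past the highest active element (active_size - 1). */
static void load_uniform_array(float *dst, int active_size,
                               int k, int N, const float *src) {
    for (int i = 0; i < N; i++)
        if (k + i < active_size)
            dst[k + i] = src[i];
}
```

For example, loading 3 values starting at element 2 of a 4-element array updates elements 2 and 3 and silently drops the value destined for element 4.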

If the value of location is -1, the Uniform* commands will silently ignore the data passed in, and the current uniform values will not be changed.

If any of the following conditions occur, an INVALID_OPERATION error is generated by the Uniform* commands, and no uniform values are changed:

  • if the size indicated in the name of the Uniform* command used does not match the size of the uniform declared in the shader,

  • if the uniform declared in the shader is not of type boolean and the type indicated in the name of the Uniform* command used does not match the type of the uniform,

  • if count is greater than one, and the uniform declared in the shader is not an array variable,

  • if no variable with a location of location exists in the program object currently in use and location is not -1, or

  • if there is no program object currently in use.

Samplers

Samplers are special uniforms used in the OpenGL Shading Language to identify the texture object used for each texture lookup. The value of a sampler indicates the texture image unit being accessed. Setting a sampler's value to i selects texture image unit number i. The values of i range from zero to the implementation-dependent maximum supported number of texture image units.

The type of the sampler identifies the target on the texture image unit. The texture object bound to that texture image unit's target is then used for the texture lookup. For example, a variable of type sampler2D selects target TEXTURE_2D on its texture image unit. Binding of texture objects to targets is done as usual with BindTexture. Selecting the texture image unit to bind to is done as usual with ActiveTexture.

The location of a sampler needs to be queried with GetUniformLocation, just like any uniform variable. Sampler values need to be set by calling Uniform1i{v}. Loading samplers with any of the other Uniform* entry points is not allowed and will result in an INVALID_OPERATION error.

It is not allowed to have variables of different sampler types pointing to the same texture image unit within a program object. This situation can only be detected at the next rendering command issued, and an INVALID_OPERATION error will then be generated.

Active samplers are samplers actually being used in a program object. The LinkProgram command determines if a sampler is active or not. The LinkProgram command will attempt to determine if the active samplers in the shader(s) contained in the program object exceed the maximum allowable limits. If it determines that the count of active samplers exceeds the allowable limits, then the link fails (these limits can be different for different types of shaders). Each active sampler variable counts against the limit, even if multiple samplers refer to the same texture image unit. If this cannot be determined at link time, for example if the program object only contains a vertex shader, then it will be determined at the next rendering command issued, and an INVALID_OPERATION error will then be generated.

Varying Variables

A vertex shader may define one or more varying variables (see the OpenGL Shading Language specification). These values are expected to be interpolated across the primitive being rendered. The OpenGL Shading Language specification defines a set of built-in varying variables for vertex shaders that correspond to the values required for the fixed-function processing that occurs after vertex processing.

The number of interpolators available for processing varying variables is given by the value of the implementation-dependent constant MAX_VARYING_COMPONENTS. This value represents the number of individual floating-point values that can be interpolated; varying variables declared as vectors, matrices, and arrays will all consume multiple interpolators. When a program is linked, all components of any varying variable written by a vertex shader, read by a fragment shader, or used for transform feedback will count against this limit. The transformed vertex position (gl_Position) is not a varying variable and does not count against this limit. A program whose shaders access more than the value of MAX_VARYING_COMPONENTS components worth of varying variables may fail to link, unless device-dependent optimizations are able to make the program fit within available hardware resources.
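How varying declarations consume interpolators can be sketched as a component count summed against the limit. The sketch below is illustrative, not part of the GL: the limit value of 64 is an assumption (real values are implementation-dependent and must be queried), and the per-type component counts follow the rule that a vec4 consumes 4 components, a mat4 consumes 16, and an array multiplies its element's count.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical limit; real implementations report their own value. */
#define MAX_VARYING_COMPONENTS 64

/* Components consumed by one varying, given its per-element component
 * count (1 for float, 4 for vec4, 16 for mat4, ...) and array size. */
static int varying_components(int components_per_element, int array_size)
{
    return components_per_element * array_size;
}

/* Returns 1 if the summed component counts fit within the limit,
 * 0 if the program would be permitted to fail to link. */
static int varyings_fit(const int *counts, size_t n)
{
    int total = 0;
    for (size_t i = 0; i < n; ++i)
        total += counts[i];
    return total <= MAX_VARYING_COMPONENTS;
}
```

For example, two vec4 varyings and one mat4 consume 4 + 4 + 16 = 24 components and fit comfortably, while a single array of 17 vec4s (68 components) would exceed the assumed limit.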

Each program object can specify a set of one or more varying variables to be recorded in transform feedback mode with the command

void TransformFeedbackVaryings( uint program, sizei count, const char **varyings, enum bufferMode );

program specifies the program object. count specifies the number of varying variables used for transform feedback. varyings is an array of count zero-terminated strings specifying the names of the varying variables to use for transform feedback. The varying variables specified in varyings can be either built-in varying variables (beginning with "gl_") or user-defined ones. Varying variables are written out in the order they appear in the array varyings. bufferMode is either INTERLEAVED_ATTRIBS or SEPARATE_ATTRIBS, and identifies the mode used to capture the varying variables when transform feedback is active. The error INVALID_VALUE is generated if program is not the name of a program object, or if bufferMode is SEPARATE_ATTRIBS and count is greater than the value of the implementation-dependent limit MAX_TRANSFORM_FEEDBACK_SEPARATE_ATTRIBS.
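The INVALID_VALUE rule above can be modelled with a small stub check. This is not the real GL entry point: the limit value of 4 is an assumption standing in for the implementation's reported MAX_TRANSFORM_FEEDBACK_SEPARATE_ATTRIBS, and is_program stands in for the GL's program-name lookup.

```c
#include <assert.h>

#define NO_ERROR            0
#define INVALID_VALUE       0x0501
#define INTERLEAVED_ATTRIBS 0x8C8C
#define SEPARATE_ATTRIBS    0x8C8D

/* Assumed limit; real implementations report their own value. */
#define MAX_TRANSFORM_FEEDBACK_SEPARATE_ATTRIBS 4

/* Stub modelling only the error rule quoted above. is_program says
 * whether the name passed in actually names a program object. */
static int check_transform_feedback_varyings(int is_program, int count,
                                             unsigned buffer_mode)
{
    if (!is_program)
        return INVALID_VALUE;
    if (buffer_mode == SEPARATE_ATTRIBS &&
        count > MAX_TRANSFORM_FEEDBACK_SEPARATE_ATTRIBS)
        return INVALID_VALUE;
    return NO_ERROR;
}
```

Note that in INTERLEAVED_ATTRIBS mode count is not checked against this limit; only the total component count matters there, at link time.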

The state set by TransformFeedbackVaryings has no effect on the execution of the program until program is subsequently linked. When LinkProgram is called, the program is linked so that the values of the specified varying variables for the vertices of each primitive generated by the GL are written to a single buffer object (if the buffer mode is INTERLEAVED_ATTRIBS) or multiple buffer objects (if the buffer mode is SEPARATE_ATTRIBS). A program will fail to link if:

  • the count specified by TransformFeedbackVaryings is non-zero, but the program object has no vertex shader;

  • any variable name specified in the varyings array is not declared as an output in the vertex shader;

  • any two entries in the varyings array specify the same varying variable;

  • the total number of components to capture in any varying variable in varyings is greater than the constant MAX_TRANSFORM_FEEDBACK_SEPARATE_COMPONENTS and the buffer mode is SEPARATE_ATTRIBS; or

  • the total number of components to capture is greater than the constant MAX_TRANSFORM_FEEDBACK_INTERLEAVED_COMPONENTS and the buffer mode is INTERLEAVED_ATTRIBS.

To determine the set of varying variables in a linked program object that will be captured in transform feedback mode, the command:

void GetTransformFeedbackVarying( uint program, uint index, sizei bufSize, sizei *length, sizei *size, enum *type, char *name );

provides information about the varying variable selected by index. An index of 0 selects the first varying variable specified in the varyings array of TransformFeedbackVaryings, and an index of TRANSFORM_FEEDBACK_VARYINGS − 1 selects the last such varying variable. The value of TRANSFORM_FEEDBACK_VARYINGS can be queried with GetProgramiv (see section 6.1.15). If index is greater than or equal to TRANSFORM_FEEDBACK_VARYINGS, the error INVALID_VALUE is generated. The parameter program is the name of a program object for which the command LinkProgram has been issued in the past. If a new set of varying variables is specified by TransformFeedbackVaryings after a program object has been linked, the information returned by GetTransformFeedbackVarying will not reflect those variables until the program is re-linked.

The name of the selected varying is returned as a null-terminated string in name. The actual number of characters written into name, excluding the null terminator, is returned in length. If length is NULL, no length is returned. The maximum number of characters that may be written into name, including the null terminator, is specified by bufSize. The returned varying name can be the name of a user defined varying variable or the name of a built-in varying (which begin with the prefix gl_; see the OpenGL Shading Language specification for a complete list). The length of the longest varying name in program is given by TRANSFORM_FEEDBACK_VARYING_MAX_LENGTH, which can be queried with GetProgramiv (see section 6.1.15).
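The bufSize/length contract (bufSize bounds the write including the terminator; length reports the characters written excluding it) can be modelled with plain C string handling. The function below is a sketch of that contract, not the GL implementation:

```c
#include <assert.h>
#include <string.h>

/* Copies varying_name into name, writing at most buf_size bytes
 * including the null terminator. If length is non-NULL it receives the
 * number of characters written, excluding the terminator. Mirrors the
 * string-return convention described for GetTransformFeedbackVarying. */
static void return_varying_name(const char *varying_name, int buf_size,
                                int *length, char *name)
{
    int n = (int)strlen(varying_name);
    if (n > buf_size - 1)
        n = buf_size - 1; /* truncate so the terminator still fits */
    memcpy(name, varying_name, (size_t)n);
    name[n] = '\0';
    if (length != NULL)
        *length = n;
}
```

With an 8-byte buffer, "gl_Position" is truncated to the 7 characters "gl_Posi" plus the terminator, and length reports 7.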

For the selected varying variable, its type is returned into type. The size of the varying is returned into size. The value in size is in units of the type returned in type. The type returned can be any of FLOAT, FLOAT_VEC2, FLOAT_VEC3, FLOAT_VEC4, INT, INT_VEC2, INT_VEC3, INT_VEC4, UNSIGNED_INT, UNSIGNED_INT_VEC2, UNSIGNED_INT_VEC3, UNSIGNED_INT_VEC4, FLOAT_MAT2, FLOAT_MAT3, or FLOAT_MAT4. If an error occurred, the return parameters length, size, type and name will be unmodified. This command will return as much information about the varying variables as possible. If no information is available, length will be set to zero and name will be an empty string. This situation could arise if GetTransformFeedbackVarying is called after a failed link.

2.20.4 Shader Execution

If a successfully linked program object that contains a vertex shader is made current by calling UseProgram, the executable version of the vertex shader is used to process incoming vertex values rather than the fixed-function vertex processing described in sections 2.12 through 2.19. In particular,

  • The model-view and projection matrices are not applied to vertex coordinates (section 2.12).

  • The texture matrices are not applied to texture coordinates (section 2.12.2).

  • Normals are not transformed to eye coordinates, and are not rescaled or normalized (section 2.12.3).

  • Normalization of AUTO_NORMAL evaluated normals is not performed (section 5.1).

  • Texture coordinates are not generated automatically (section 2.12.4).

  • Per-vertex lighting is not performed (section 2.19.1).

  • Color material computations are not performed (section 2.19.3).

  • Color index lighting is not performed (section 2.19.5).

  • All of the above applies when setting the current raster position (section 2.18).

The following operations are applied to vertex values that are the result of executing the vertex shader:

  • Color clamping or masking (section 2.19.6).

  • Perspective division on clip coordinates (section 2.12).

  • Viewport mapping, including depth range scaling (section 2.12.1).

  • Clipping, including client-defined clip planes (section 2.17).

  • Front face determination (section 2.19.1).

  • Flat-shading (section 2.19.7).

  • Color, texture coordinate, fog, point-size and generic attribute clipping (section 2.19.8).

  • Final color processing (section 2.19.9).

There are several special considerations for vertex shader execution described in the following sections.

Shader Only Texturing

This section describes texture functionality that is only accessible through vertex or fragment shaders. Also refer to section 3.9 and to the OpenGL Shading Language Specification, section 8.7.

Additional OpenGL Shading Language texture lookup functions (see section 8.7 of the OpenGL Shading Language Specification) return either signed or unsigned integer values if the internal format of the texture is signed or unsigned, respectively.

Texel Fetches

The OpenGL Shading Language texel fetch functions provide the ability to extract a single texel from a specified texture image. The integer coordinates passed to the texel fetch functions are used directly as the texel coordinates (i, j, k) into the texture image. This in turn means the texture image is point-sampled (no filtering is performed).

The level of detail accessed is computed by adding the specified level-of-detail parameter lod to the base level of the texture, level_base.

The texel fetch functions cannot perform depth comparisons or access cube maps. Unlike filtered texel accesses, texel fetches do not support LOD clamping or any texture wrap mode, and require a mipmapped minification filter to access any level of detail other than the base level.

The results of the texel fetch are undefined if any of the following conditions hold:

  • the computed LOD is less than the texture’s base level (level_base) or greater than the maximum level (level_max);

  • the computed LOD is not the texture’s base level and the texture’s minification filter is NEAREST or LINEAR;

  • the layer specified for array textures is negative or greater than the number of layers in the array texture;

  • the texel coordinates (i, j, k) refer to a border texel outside the defined extents of the specified LOD, where any of

        i < -b_s        i ≥ w_s - b_s
        j < -b_s        j ≥ h_s - b_s
        k < -b_s        k ≥ d_s - b_s

    and the size parameters w_s, h_s, d_s, and b_s refer to the width, height, depth, and border size of the image, as in equations 3.15; or

  • the texture being accessed is not complete (or cube complete for cubemaps).
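The coordinate condition above reduces to a simple range predicate. The sketch below checks a 3D coordinate against the level's extents; for 1D or 2D images the unused coordinates can be passed as 0 with a size of 1.

```c
#include <assert.h>

/* Returns 1 if texel coordinates (i, j, k) fall outside the defined
 * extents of the image level, i.e. a fetch there yields undefined
 * results. ws, hs, ds are the level's width, height, and depth; bs is
 * the border size. Mirrors the inequalities listed above. */
static int texel_out_of_bounds(int i, int j, int k,
                               int ws, int hs, int ds, int bs)
{
    return i < -bs || i >= ws - bs ||
           j < -bs || j >= hs - bs ||
           k < -bs || k >= ds - bs;
}
```

For a borderless 4×4×4 level, coordinate 3 on each axis is the last valid texel and coordinate 4 (or −1) is out of bounds.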

Texture Size Query

The OpenGL Shading Language texture size functions provide the ability to query the size of a texture image. The LOD value lod passed in as an argument to the texture size functions is added to the level_base of the texture to determine a texture image level. The dimensions of that image level, excluding a possible border, are then returned. If the computed texture image level is outside the range [level_base, level_max], the results are undefined. When querying the size of an array texture, both the dimensions and the layer index are returned.

Texture Access

Vertex shaders have the ability to do a lookup into a texture map, if supported by the GL implementation. The maximum number of texture image units available to a vertex shader is MAX_VERTEX_TEXTURE_IMAGE_UNITS; a maximum number of zero indicates that the GL implementation does not support texture accesses in vertex shaders. The maximum number of texture image units available to the fragment stage of the GL is MAX_TEXTURE_IMAGE_UNITS. Both the vertex shader and fragment processing combined cannot use more than MAX_COMBINED_TEXTURE_IMAGE_UNITS texture image units. If both the vertex shader and the fragment processing stage access the same texture image unit, then that counts as using two texture image units against the MAX_COMBINED_TEXTURE_IMAGE_UNITS limit.

When a texture lookup is performed in a vertex shader, the filtered texture value τ is computed in the manner described in sections 3.9.7 and 3.9.8, and converted to a texture source color C_s according to table 3.23 (section 3.9.13). A four-component vector (R_s, G_s, B_s, A_s) is returned to the vertex shader.


In a vertex shader, it is not possible to perform automatic level-of-detail calculations using partial derivatives of the texture coordinates with respect to window coordinates as described in section 3.9.7. Hence, there is no automatic selection of an image array level. Minification or magnification of a texture map is controlled by a level-of-detail value optionally passed as an argument in the texture lookup functions. If the texture lookup function supplies an explicit level-of-detail value l, then the pre-bias level-of-detail value λ_base(x, y) = l (replacing equation 3.16). If the texture lookup function does not supply an explicit level-of-detail value, then λ_base(x, y) = 0. The scale factor ρ(x, y) and its approximation function f(x, y) (see equation 3.20) are ignored.

Texture lookups involving textures with depth component data can either return the depth data directly or return the results of a comparison with a reference depth value specified in the coordinates passed to the texture lookup function, as described in section 3.9.14. The comparison operation is requested in the shader by using any of the shadow sampler types and in the texture using the TEXTURE_COMPARE_MODE parameter. These requests must be consistent; the results of a texture lookup are undefined if:

  • The sampler used in a texture lookup function is not one of the shadow sampler types, the texture object’s internal format is DEPTH_COMPONENT or DEPTH_STENCIL, and the TEXTURE_COMPARE_MODE is not NONE.

  • The sampler used in a texture lookup function is one of the shadow sampler types, the texture object’s internal format is DEPTH_COMPONENT or DEPTH_STENCIL, and the TEXTURE_COMPARE_MODE is NONE.

  • The sampler used in a texture lookup function is one of the shadow sampler types, and the texture object’s internal format is not DEPTH_COMPONENT or DEPTH_STENCIL.

The stencil index texture internal component is ignored if the base internal format is DEPTH_STENCIL.

If a vertex shader uses a sampler where the associated texture object is not complete, as defined in section 3.9.10, the texture image unit will return (R, G, B, A) = (0, 0, 0, 1).

Shader Inputs

Besides having access to vertex attributes and uniform variables, vertex shaders can access the read-only built-in variable gl_VertexID. gl_VertexID holds the integer index i explicitly passed to ArrayElement to specify the vertex, or implicitly passed by the DrawArrays, MultiDrawArrays, DrawElements, MultiDrawElements, and DrawRangeElements commands. The value of gl_VertexID is defined if and only if:

  • the vertex comes from a vertex array command that specifies a complete primitive (DrawArrays, MultiDrawArrays, DrawElements, MultiDrawElements, or DrawRangeElements),

  • all enabled vertex arrays have non-zero buffer object bindings, and

  • the vertex does not come from a display list, even if the display list was compiled using one of the vertex array commands described above with data sourced from buffer objects.

Also see section 7.1 of the OpenGL Shading Language Specification.

Shader Outputs

A vertex shader can write to built-in as well as user-defined varying variables. These values are expected to be interpolated across the primitive it outputs, unless they are specified to be flat shaded. Refer to section 2.19.7 and the OpenGL Shading Language specification sections 4.3.6, 7.1 and 7.6 for more detail.

The built-in output variables gl_FrontColor, gl_BackColor, gl_FrontSecondaryColor, and gl_BackSecondaryColor hold the front and back colors for the primary and secondary colors for the current vertex.

The built-in output variable gl_TexCoord[] is an array and holds the set of texture coordinates for the current vertex.

The built-in output variable gl_FogFragCoord is used as the c value described in section 3.11.

The built-in special variable gl_Position is intended to hold the homogeneous vertex position. Writing to gl_Position is optional.

The built-in special variable gl_ClipVertex holds the vertex coordinate used in the clipping stage, as described in section 2.17.

The built-in special variable gl_PointSize, if written, holds the size of the point to be rasterized, measured in pixels.

Position Invariance

If a vertex shader uses the built-in function ftransform to generate a vertex position, then this generally guarantees that the transformed position will be the same whether using this vertex shader or the fixed-function pipeline. This allows for correct multi-pass rendering algorithms, where some passes use fixed-function vertex transformation and other passes use a vertex shader. If a vertex shader does not use ftransform to generate a position, transformed positions are not guaranteed to match, even if the sequence of instructions used to compute the position matches the sequence of transformations described in section 2.12.

Validation

It is not always possible to determine at link time if a program object actually will execute. Therefore validation is done when the first rendering command is issued, to determine if the currently active program object can be executed. If it cannot be executed then no fragments will be rendered, and Begin, RasterPos, or any command that performs an implicit Begin will generate the error INVALID_OPERATION.

This error is generated by Begin, RasterPos, or any command that performs an implicit Begin if:

  • any two active samplers in the current program object are of different types, but refer to the same texture image unit,

  • any active sampler in the current program object refers to a texture image unit where fixed-function fragment processing accesses a texture target that does not match the sampler type, or

  • the sum of the number of active samplers in the program and the number of texture image units enabled for fixed-function fragment processing exceeds the combined limit on the total number of texture image units allowed.
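The first of these conditions, two active samplers of different types referring to the same texture image unit, can be sketched as a scan over the program's active sampler table. This is an illustrative helper, not the GL's validator; the unit/type fields are opaque stand-ins for the state LinkProgram records.

```c
#include <assert.h>
#include <stddef.h>

/* One active sampler: the texture image unit it references and an
 * opaque code for its sampler type (e.g. sampler2D vs. samplerCube). */
struct active_sampler {
    int unit;
    int type;
};

/* Returns 1 if any two samplers reference the same unit with different
 * types -- the condition that makes rendering commands generate
 * INVALID_OPERATION for the current program object. */
static int samplers_conflict(const struct active_sampler *s, size_t n)
{
    for (size_t i = 0; i < n; ++i)
        for (size_t j = i + 1; j < n; ++j)
            if (s[i].unit == s[j].unit && s[i].type != s[j].type)
                return 1;
    return 0;
}
```

Two samplers of the *same* type on one unit are allowed; only a type mismatch on a shared unit triggers the error.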

Fixed-function fragment processing operations will be performed if the program object in use has no fragment shader.

The INVALID_OPERATION error reported by these rendering commands may not provide enough information to find out why the currently active program object would not execute. No information at all is available about a program object that would still execute, but is inefficient or suboptimal given the current GL state. As a development aid, use the command

void ValidateProgram( uint program );

to validate the program object program against the current GL state. Each program object has a boolean status, VALIDATE_STATUS, that is modified as a result of validation. This status can be queried with GetProgramiv (see section 6.1.15). If validation succeeded this status will be set to TRUE, otherwise it will be set to FALSE. If validation succeeded the program object is guaranteed to execute, given the current GL state. If validation failed, the program object is guaranteed to not execute, given the current GL state.

ValidateProgram will check for all the conditions that could lead to an INVALID_OPERATION error when rendering commands are issued, and may check for other conditions as well. For example, it could give a hint on how to optimize some piece of shader code. The information log of program is overwritten with information on the results of the validation, which could be an empty string. The results written to the information log are typically only useful during application development; an application should not expect different GL implementations to produce identical information.

A shader should not fail to compile, and a program object should not fail to link, due to lack of instruction space or lack of temporary variables. Implementations should ensure that all valid shaders and program objects may be successfully compiled, linked and executed.

Undefined Behavior

When using array or matrix variables in a shader, it is possible to access a variable with an index computed at run time that is outside the declared extent of the variable. Such out-of-bounds reads will return undefined values; out-of-bounds writes will have undefined results and could corrupt other variables used by the shader or the GL. The level of protection provided against such errors in the shader is implementation-dependent.

2.20.5 Required State

The GL maintains state to indicate which shader and program object names are in use. Initially, no shader or program objects exist, and no names are in use.

The state required per shader object consists of:

  • An unsigned integer specifying the shader object name.

  • An integer holding the value of SHADER_TYPE.

  • A boolean holding the delete status, initially FALSE.

  • A boolean holding the status of the last compile, initially FALSE.

  • An array of type char containing the information log, initially empty.

  • An integer holding the length of the information log.

  • An array of type char containing the concatenated shader string, initially empty.

  • An integer holding the length of the concatenated shader string.

The state required per program object consists of:

  • An unsigned integer indicating the program object name.

  • A boolean holding the delete status, initially FALSE.

  • A boolean holding the status of the last link attempt, initially FALSE.

  • A boolean holding the status of the last validation attempt, initially FALSE.

  • An integer holding the number of attached shader objects.

  • A list of unsigned integers to keep track of the names of the shader objects attached.

  • An array of type char containing the information log, initially empty.

  • An integer holding the length of the information log.

  • An integer holding the number of active uniforms.

  • For each active uniform, three integers, holding its location, size, and type, and an array of type char holding its name.

  • An array of words that hold the values of each active uniform.

  • An integer holding the number of active attributes.

  • For each active attribute, three integers holding its location, size, and type, and an array of type char holding its name.

Additional state required to support vertex shaders consists of:

  • A bit indicating whether or not vertex program two-sided color mode is enabled, initially disabled.

  • A bit indicating whether or not vertex program point size mode (section 3.4.1) is enabled, initially disabled.

Additionally, one unsigned integer is required to hold the name of the current program object, if any.

Chapter 3

Rasterization

Rasterization is the process by which a primitive is converted to a two-dimensional image. Each point of this image contains such information as color and depth. Thus, rasterizing a primitive consists of two parts. The first is to determine which squares of an integer grid in window coordinates are occupied by the primitive. The second is assigning a depth value and one or more color values to each such square. The results of this process are passed on to the next stage of the GL (per-fragment operations), which uses the information to update the appropriate locations in the framebuffer. Figure 3.1 diagrams the rasterization process. The color values assigned to a fragment are initially determined by the rasterization operations (sections 3.4 through 3.8) and modified by either the execution of the texturing, color sum, and fog operations defined in sections 3.9, 3.10, and 3.11, or by a fragment shader as defined in section 3.12. The final depth value is initially determined by the rasterization operations and may be modified or replaced by a fragment shader. The results from rasterizing a point, line, polygon, pixel rectangle or bitmap can be routed through a fragment shader.

A grid square along with its parameters of assigned colors, z (depth), fog coordinate, and texture coordinates is called a fragment; the parameters are collectively dubbed the fragment’s associated data. A fragment is located by its lower left corner, which lies on integer grid coordinates. Rasterization operations also refer to a fragment’s center, which is offset by (1/2, 1/2) from its lower left corner (and so lies on half-integer coordinates).

Grid squares need not actually be square in the GL. Rasterization rules are not affected by the actual aspect ratio of the grid squares. Display of non-square grids, however, will cause rasterized points and line segments to appear fatter in one direction than the other. We assume that fragments are square, since it simplifies antialiasing and texturing.



Several factors affect rasterization. Primitives may be discarded before rasterization. Lines and polygons may be stippled. Points may be given differing diameters and line segments differing widths. A point, line segment, or polygon may be antialiased.

3.1 Discarding Primitives Before Rasterization

Primitives can be optionally discarded before rasterization by calling Enable and Disable with RASTERIZER_DISCARD. When enabled, primitives are discarded immediately before the rasterization stage, but after the optional transform feedback stage (see section 2.15). When disabled, primitives are passed through to the rasterization stage to be processed normally. RASTERIZER_DISCARD also affects the DrawPixels, CopyPixels, Bitmap, Clear and Accum commands.

3.2 Invariance

Consider a primitive p′ obtained by translating a primitive p through an offset (x, y) in window coordinates, where x and y are integers. As long as neither p′ nor p is clipped, it must be the case that each fragment f′ produced from p′ is identical to a corresponding fragment f from p except that the center of f′ is offset by (x, y) from the center of f.

3.3 Antialiasing

Antialiasing of a point, line, or polygon is effected in one of two ways depending on whether the GL is in RGBA or color index mode.

In RGBA mode, the R, G, and B values of the rasterized fragment are left unaffected, but the A value is multiplied by a floating-point value in the range [0, 1] that describes a fragment’s screen pixel coverage. The per-fragment stage of the GL can be set up to use the A value to blend the incoming fragment with the corresponding pixel already present in the framebuffer.

In color index mode, the least significant b bits (to the left of the binary point) of the color index are used for antialiasing; b = min{4, m}, where m is the number of bits in the color index portion of the framebuffer. The antialiasing process sets these b bits based on the fragment’s coverage value: the bits are set to zero for no coverage and to all ones for complete coverage.
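The bit count b and the two endpoint bit patterns can be computed directly. The sketch below covers only the two cases the text defines (no coverage and complete coverage); how intermediate coverage maps to bit patterns is left to the implementation, so it is not modelled here.

```c
#include <assert.h>

/* Number of antialiasing bits: b = min{4, m}, where m is the number of
 * bits in the color index portion of the framebuffer. */
static int antialias_bits(int m)
{
    return m < 4 ? m : 4;
}

/* Endpoint bit patterns: all b bits set for complete coverage,
 * zero for no coverage. */
static unsigned coverage_endpoint_bits(int b, int fully_covered)
{
    return fully_covered ? (1u << b) - 1u : 0u;
}
```

For an 8-bit color index buffer, b = 4 and complete coverage sets the pattern 0xF in the low bits.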

The details of how antialiased fragment coverage values are computed are difficult to specify in general. The reason is that high-quality antialiasing may take into account perceptual issues as well as characteristics of the monitor on which the contents of the framebuffer are displayed. Such details cannot be addressed within the scope of this document. Further, the coverage value computed for a fragment of some primitive may depend on the primitive’s relationship to a number of grid squares neighboring the one corresponding to the fragment, and not just on the fragment’s grid square. Another consideration is that accurate calculation of coverage values may be computationally expensive; consequently we allow a given GL implementation to approximate true coverage values by using a fast but not entirely accurate coverage computation.

In light of these considerations, we chose to specify the behavior of exact antialiasing in the prototypical case that each displayed pixel is a perfect square of uniform intensity. The square is called a fragment square and has lower left corner (x, y) and upper right corner (x + 1, y + 1). We recognize that this simple box filter may not produce the most favorable antialiasing results, but it provides a simple, well-defined model.

A GL implementation may use other methods to perform antialiasing, subject to the following conditions:

  1. If f1 and f2 are two fragments, and the portion of f1 covered by some primitive is a subset of the corresponding portion of f2 covered by the primitive, then the coverage computed for f1 must be less than or equal to that computed for f2.

  2. The coverage computation for a fragment f must be local: it may depend only on f’s relationship to the boundary of the primitive being rasterized. It may not depend on f’s x and y coordinates.

Another property that is desirable, but not required, is:

3. The sum of the coverage values for all fragments produced by rasterizing a particular primitive must be constant, independent of any rigid motions in window coordinates, as long as none of those fragments lies along window edges.

In some implementations, varying degrees of antialiasing quality may be obtained by providing GL hints (section 5.6), allowing a user to make an image quality versus speed tradeoff.

3.3.1 Multisampling

Multisampling is a mechanism to antialias all GL primitives: points, lines, polygons, bitmaps, and images. The technique is to sample all primitives multiple times at each pixel. The color sample values are resolved to a single, displayable color each time a pixel is updated, so the antialiasing appears to be automatic at the application level. Because each sample includes color, depth, and stencil information, the color (including texture operation), depth, and stencil functions perform equivalently to the single-sample mode.

An additional buffer, called the multisample buffer, is added to the framebuffer. Pixel sample values, including color, depth, and stencil values, are stored in this buffer. Samples contain separate color values for each fragment color. When the framebuffer includes a multisample buffer, it does not include depth or stencil buffers, even if the multisample buffer does not store depth or stencil values. Color buffers (left, right, front, back, and aux) do coexist with the multisample buffer, however.

Multisample antialiasing is most valuable for rendering polygons, because it requires no sorting for hidden surface elimination, and it correctly handles adjacent polygons, object silhouettes, and even intersecting polygons. If only points or lines are being rendered, the “smooth” antialiasing mechanism provided by the base GL may result in a higher quality image. This mechanism is designed to allow multisample and smooth antialiasing techniques to be alternated during the rendering of a single scene.

If the value of SAMPLE_BUFFERS is one, the rasterization of all primitives is changed, and is referred to as multisample rasterization. Otherwise, primitive rasterization is referred to as single-sample rasterization. The value of SAMPLE_BUFFERS is queried by calling GetIntegerv with pname set to SAMPLE_BUFFERS.

During multisample rendering the contents of a pixel fragment are changed in two ways. First, each fragment includes a coverage value with SAMPLES bits. The value of SAMPLES is an implementation-dependent constant, and is queried by calling GetIntegerv with pname set to SAMPLES.

Second, each fragment includes SAMPLES depth values, color values, and sets of texture coordinates, instead of the single depth value, color value, and set of texture coordinates that is maintained in single-sample rendering mode. An implementation may choose to assign the same color value and the same set of texture coordinates to more than one sample. The location for evaluating the color value and the set of texture coordinates can be anywhere within the pixel, including the fragment center or any of the sample locations. The color value and the set of texture coordinates need not be evaluated at the same location. Each pixel fragment thus consists of integer x and y grid coordinates, SAMPLES color and depth values, SAMPLES sets of texture coordinates, and a coverage value with a maximum of SAMPLES bits.
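The shape of such a pixel fragment can be pictured as a structure. This is purely illustrative: the SAMPLES value of 4 is an assumption (real values are implementation-dependent and queried at run time), and real implementations lay this state out however they choose.

```c
#include <assert.h>

#define SAMPLES 4 /* assumed; query GetIntegerv(SAMPLES) in practice */

struct rgba { float r, g, b, a; };

/* Per-fragment multisample state: one color, depth value, and texture
 * coordinate set per sample, plus a coverage mask with one bit per
 * sample. */
struct ms_fragment {
    int x, y;                    /* integer grid coordinates */
    struct rgba color[SAMPLES];
    float depth[SAMPLES];
    float texcoord[SAMPLES][4];  /* one (s, t, r, q) set per sample */
    unsigned coverage;           /* low SAMPLES bits are meaningful */
};

/* Coverage mask for a fully covered fragment: all SAMPLES bits set. */
static unsigned full_coverage(void)
{
    return (1u << SAMPLES) - 1u;
}
```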

Multisample rasterization is enabled or disabled by calling Enable or Disable with the symbolic constant MULTISAMPLE.

If MULTISAMPLE is disabled, multisample rasterization of all primitives is equivalent to single-sample (fragment-center) rasterization, except that the fragment coverage value is set to full coverage. The color and depth values and the sets of texture coordinates may all be set to the values that would have been assigned by single-sample rasterization, or they may be assigned as described below for multisample rasterization.

If MULTISAMPLE is enabled, multisample rasterization of all primitives differs substantially from single-sample rasterization. It is understood that each pixel in the framebuffer has SAMPLES locations associated with it. These locations are exact positions, rather than regions or areas, and each is referred to as a sample point. The sample points associated with a pixel may be located inside or outside of the unit square that is considered to bound the pixel. Furthermore, the relative locations of sample points may be identical for each pixel in the framebuffer, or they may differ.

If the sample locations differ per pixel, they should be aligned to window, not screen, boundaries. Otherwise rendering results will be window-position specific. The invariance requirement described in section 3.2 is relaxed for all multisample rasterization, because the sample locations may be a function of pixel location.

It is not possible to query the actual sample locations of a pixel.

3.4 Points

If a vertex shader is not active, then the rasterization of points is controlled with

void PointSize( float size );

size specifies the requested size of a point. The default value is 1.0. A value less than or equal to zero results in the error INVALID_VALUE.

The requested point size is multiplied with a distance attenuation factor, clamped to a specified point size range, and further clamped to the implementation-dependent point size range to produce the derived point size:

derived size = clamp( size × sqrt( 1 / (a + b·d + c·d²) ) )

where d is the eye-coordinate distance from the eye, (0, 0, 0, 1) in eye coordinates, to the vertex, and a, b, and c are distance attenuation function coefficients.

If multisampling is not enabled, the derived size is passed on to rasterization as the point width.


If a vertex shader is active and vertex program point size mode is enabled, then the derived point size is taken from the (potentially clipped) shader built-in gl_PointSize and clamped to the implementation-dependent point size range. If the value written to gl_PointSize is less than or equal to zero, results are undefined. If a vertex shader is active and vertex program point size mode is disabled, then the derived point size is taken from the point size state as specified by the PointSize command. In this case no distance attenuation is performed. Vertex program point size mode is enabled and disabled by calling Enable or Disable with the symbolic value VERTEX_PROGRAM_POINT_SIZE.

If multisampling is enabled, an implementation may optionally fade the point alpha (see section 3.14) instead of allowing the point width to go below a given threshold. In this case, the width of the rasterized point is

width = { derived size   derived size ≥ threshold
        { threshold      otherwise                      (3.1)

and the fade factor is computed as follows:

fade = { 1                            derived size ≥ threshold
       { (derived size / threshold)²  otherwise                  (3.2)
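Equations 3.1 and 3.2 amount to a small branch; a sketch in C (the struct and function names are illustrative):

```c
/* Width/fade rule of equations 3.1 and 3.2: when multisampling is enabled
 * an implementation may clamp the rasterized width to the fade threshold
 * and fade the point alpha instead of shrinking the point further. */
typedef struct { float width; float fade; } point_fade_t;

static point_fade_t point_width_and_fade(float derived_size, float threshold) {
    point_fade_t r;
    if (derived_size >= threshold) {
        r.width = derived_size;      /* eq. 3.1, first case  */
        r.fade  = 1.0f;              /* eq. 3.2, first case  */
    } else {
        float q = derived_size / threshold;
        r.width = threshold;         /* eq. 3.1, second case */
        r.fade  = q * q;             /* eq. 3.2, second case */
    }
    return r;
}
```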

The distance attenuation function coefficients a, b, and c, the bounds of the first point size range clamp, and the point fade threshold, are specified with

void PointParameter{if}( enum pname, T param );

void PointParameter{if}v( enum pname, const T params );

If pname is POINT_SIZE_MIN or POINT_SIZE_MAX, then param specifies, or params points to, the lower or upper bound respectively to which the derived point size is clamped. If the lower bound is greater than the upper bound, the point size after clamping is undefined. If pname is POINT_DISTANCE_ATTENUATION, then params points to the coefficients a, b, and c. If pname is POINT_FADE_THRESHOLD_SIZE, then param specifies, or params points to, the point fade threshold. Values of POINT_SIZE_MIN, POINT_SIZE_MAX, or POINT_FADE_THRESHOLD_SIZE less than zero result in the error INVALID_VALUE.

Point antialiasing is enabled or disabled by calling Enable or Disable with the symbolic constant POINT_SMOOTH. The default state is for point antialiasing to be disabled.

Point sprites are enabled or disabled by calling Enable or Disable with the symbolic constant POINT_SPRITE. The default state is for point sprites to be disabled. When point sprites are enabled, the state of the point antialiasing enable is ignored.

The point sprite texture coordinate replacement mode is set with one of the TexEnv* commands described in section 3.9.13, where target is POINT_SPRITE and pname is COORD_REPLACE. The possible values for param are FALSE and TRUE. The default value for each texture coordinate set is for point sprite texture coordinate replacement to be disabled.

The point sprite texture coordinate origin is set with the PointParameter* commands where pname is POINT_SPRITE_COORD_ORIGIN and param is LOWER_LEFT or UPPER_LEFT. The default value is UPPER_LEFT.

3.4.1 Basic Point Rasterization

In the default state, a point is rasterized by truncating its x_w and y_w coordinates (recall that the subscripts indicate that these are x and y window coordinates) to integers. This (x, y) address, along with data derived from the data associated with the vertex corresponding to the point, is sent as a single fragment to the per-fragment stage of the GL.

The effect of a point width other than 1.0 depends on the state of point antialiasing and point sprites. If antialiasing and point sprites are disabled, the actual width is determined by rounding the supplied width to the nearest integer, then clamping it to the implementation-dependent maximum non-antialiased point width. This implementation-dependent value must be no less than the implementation-dependent maximum antialiased point width, rounded to the nearest integer value, and in any event no less than 1. If rounding the specified width results in the value 0, then it is as if the value were 1. If the resulting width is odd, then the point

(x, y) = (⌊x_w⌋ + 1/2, ⌊y_w⌋ + 1/2)

is computed from the vertex's x_w and y_w, and a square grid of the odd width centered at (x, y) defines the centers of the rasterized fragments (recall that fragment centers lie at half-integer window coordinate values). If the width is even, then the center point is

(x, y) = (⌊x_w + 1/2⌋, ⌊y_w + 1/2⌋);

the rasterized fragment centers are the half-integer window coordinate values within the square of the even width centered on (x,y). See figure 3.2.


Figure 3.3. Rasterization of antialiased wide points. The black dot indicates the point to be rasterized. The shaded region has the specified width. The X marks indicate those fragment centers produced by rasterization. A fragment's computed coverage value is based on the portion of the shaded region that covers the corresponding fragment square. Solid lines lie on integer coordinates.


All fragments produced in rasterizing a non-antialiased point are assigned the same associated data, which are those of the vertex corresponding to the point.

If antialiasing is enabled and point sprites are disabled, then point rasterization produces a fragment for each fragment square that intersects the region lying within the circle having diameter equal to the current point width and centered at the point's (x_w, y_w) (figure 3.3). The coverage value for each fragment is the window coordinate area of the intersection of the circular region with the corresponding fragment square (but see section 3.3). This value is saved and used in the final step of rasterization (section 3.13). The data associated with each fragment are otherwise the data associated with the point being rasterized.

Not all widths need be supported when point antialiasing is on, but the width 1.0 must be provided. If an unsupported width is requested, the nearest supported width is used instead. The range of supported widths and the width of evenly-spaced gradations within that range are implementation dependent. The range and gradations may be obtained using the query mechanism described in chapter 6. If, for instance, the width range is from 0.1 to 2.0 and the gradation width is 0.1, then the widths 0.1, 0.2, . . . , 1.9, 2.0 are supported.

If point sprites are enabled, then point rasterization produces a fragment for each framebuffer pixel whose center lies inside a square centered at the point's (x_w, y_w), with side length equal to the current point size.

All fragments produced in rasterizing a point sprite are assigned the same associated data, which are those of the vertex corresponding to the point. However, for each texture coordinate set where COORD_REPLACE is TRUE, these texture coordinates are replaced with point sprite texture coordinates. The s coordinate varies from 0 to 1 across the point horizontally left-to-right. If POINT_SPRITE_COORD_ORIGIN is LOWER_LEFT, the t coordinate varies from 0 to 1 vertically bottom-to-top. Otherwise if the point sprite texture coordinate origin is UPPER_LEFT, the t coordinate varies from 0 to 1 vertically top-to-bottom. The r and q coordinates are replaced with the constants 0 and 1, respectively.

The following formula is used to evaluate the s and t coordinates:

s = 1/2 + (x_f + 1/2 − x_w) / size    (3.3)

t = { 1/2 + (y_f + 1/2 − y_w) / size,  POINT_SPRITE_COORD_ORIGIN = LOWER_LEFT
    { 1/2 − (y_f + 1/2 − y_w) / size,  POINT_SPRITE_COORD_ORIGIN = UPPER_LEFT    (3.4)

where size is the point's size, x_f and y_f are the (integral) window coordinates of the fragment, and x_w and y_w are the exact, unrounded window coordinates of the vertex for the point.
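Equations 3.3 and 3.4 can be evaluated per fragment as below (function names are illustrative; lower_left selects the LOWER_LEFT origin case):

```c
/* Point sprite s coordinate, equation 3.3: xf is the integral window x of
 * the fragment, xw the unrounded window x of the vertex, size the point
 * size. */
static float sprite_s(int xf, float xw, float size) {
    return 0.5f + (xf + 0.5f - xw) / size;
}

/* Point sprite t coordinate, equation 3.4: the sign of the second term
 * flips with POINT_SPRITE_COORD_ORIGIN. */
static float sprite_t(int yf, float yw, float size, int lower_left) {
    float d = (yf + 0.5f - yw) / size;
    return lower_left ? 0.5f + d : 0.5f - d;
}
```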

The widths supported for point sprites must be a superset of those supported for antialiased points. There is no requirement that these widths must be equally spaced. If an unsupported width is requested, the nearest supported width is used instead.

3.4.2 Point Rasterization State

The state required to control point rasterization consists of the floating-point point width, three floating-point values specifying the minimum and maximum point size and the point fade threshold size, three floating-point values specifying the distance attenuation coefficients, a bit indicating whether or not antialiasing is enabled, a bit for the point sprite texture coordinate replacement mode for each texture coordinate set, and a bit for the point sprite texture coordinate origin.

3.4.3 Point Multisample Rasterization

If MULTISAMPLE is enabled, and the value of SAMPLE_BUFFERS is one, then points are rasterized using the following algorithm, regardless of whether point antialiasing (POINT_SMOOTH) is enabled or disabled. Point rasterization produces a fragment for each framebuffer pixel with one or more sample points that intersect a region centered at the point's (x_w, y_w). This region is a circle having diameter equal to the current point width if POINT_SPRITE is disabled, or a square with side equal to the current point width if POINT_SPRITE is enabled. Coverage bits that correspond to sample points that intersect the region are 1, other coverage bits are 0. All data associated with each sample for the fragment are the data associated with the point being rasterized, with the exception of texture coordinates when POINT_SPRITE is enabled; these texture coordinates are computed as described in section 3.4.

Point size range and number of gradations are equivalent to those supported for antialiased points when POINT_SPRITE is disabled. The set of point sizes supported is equivalent to those for point sprites without multisample when POINT_SPRITE is enabled.

3.5 Line Segments

A line segment results from a line strip Begin/End object, a line loop, or a series of separate line segments. Line segment rasterization is controlled by several variables. Line width, which may be set by calling


void LineWidth( float width );

with an appropriate positive floating-point width, controls the width of rasterized line segments. The default width is 1.0. Values less than or equal to 0.0 generate the error INVALID_VALUE. Antialiasing is controlled with Enable and Disable using the symbolic constant LINE_SMOOTH. Finally, line segments may be stippled. Stippling is controlled by a GL command that sets a stipple pattern (see below).

3.5.1 Basic Line Segment Rasterization

Line segment rasterization begins by characterizing the segment as either x-major or y-major. x-major line segments have slope in the closed interval [−1, 1]; all other line segments are y-major (slope is determined by the segment's endpoints). We shall specify rasterization only for x-major segments except in cases where the modifications for y-major segments are not self-evident.

Ideally, the GL uses a “diamond-exit” rule to determine those fragments that are produced by rasterizing a line segment. For each fragment f with center at window coordinates x_f and y_f, define a diamond-shaped region that is the intersection of four half planes:

R_f = { (x, y) : |x − x_f| + |y − y_f| < 1/2 }

Essentially, a line segment starting at p_a and ending at p_b produces those fragments f for which the segment intersects R_f, except if p_b is contained in R_f. See figure 3.4.

To avoid difficulties when an endpoint lies on a boundary of R_f we (in principle) perturb the supplied endpoints by a tiny amount. Let p_a and p_b have window coordinates (x_a, y_a) and (x_b, y_b), respectively. Obtain the perturbed endpoints p_a′ given by (x_a, y_a) − (ε, ε²) and p_b′ given by (x_b, y_b) − (ε, ε²). Rasterizing the line segment starting at p_a and ending at p_b produces those fragments f for which the segment starting at p_a′ and ending on p_b′ intersects R_f, except if p_b′ is contained in R_f. ε is chosen to be so small that rasterizing the line segment produces the same fragments when δ is substituted for ε for any 0 < δ ≤ ε.

When p_a and p_b lie on fragment centers, this characterization of fragments reduces to Bresenham's algorithm with one modification: lines produced in this description are “half-open,” meaning that the final fragment (corresponding to p_b) is not drawn. This means that when rasterizing a series of connected line segments, shared endpoints will be produced only once rather than twice (as would occur with Bresenham's algorithm).

Because the initial and final conditions of the diamond-exit rule may be difficult to implement, other line segment rasterization algorithms are allowed, subject to the following rules:


Figure 3.4. Visualization of Bresenham's algorithm. A portion of a line segment is shown. A diamond shaped region of height 1 is placed around each fragment center; those regions that the line segment exits cause rasterization to produce corresponding fragments.

  1. The coordinates of a fragment produced by the algorithm may not deviate by more than one unit in either x or y window coordinates from a corresponding fragment produced by the diamond-exit rule.

  2. The total number of fragments produced by the algorithm may differ from that produced by the diamond-exit rule by no more than one.

  3. For an x-major line, no two fragments may be produced that lie in the same window-coordinate column (for a y-major line, no two fragments may appear in the same row).

  4. If two line segments share a common endpoint, and both segments are either x-major (both left-to-right or both right-to-left) or y-major (both bottom-to-top or both top-to-bottom), then rasterizing both segments may not produce duplicate fragments, nor may any fragments be omitted so as to interrupt continuity of the connected segments.

Next we must specify how the data associated with each rasterized fragment are obtained. Let the window coordinates of a produced fragment center be given by p_r = (x_d, y_d) and let p_a = (x_a, y_a) and p_b = (x_b, y_b). Set

t = ( (p_r − p_a) · (p_b − p_a) ) / ‖p_b − p_a‖²    (3.5)

(Note that t = 0 at p_a and t = 1 at p_b.) The value of an associated datum f for the fragment, whether it be primary or secondary R, G, B, or A (in RGBA mode) or a color index (in color index mode), the fog coordinate, an s, t, r, or q texture coordinate, or the clip w coordinate, is found as

f = ( (1 − t) f_a/w_a + t f_b/w_b ) / ( (1 − t)/w_a + t/w_b )    (3.6)

where f_a and f_b are the data associated with the starting and ending endpoints of the segment, respectively; w_a and w_b are the clip w coordinates of the starting and ending endpoints of the segment, respectively. However, depth values for lines must be interpolated by

z = (1 − t) z_a + t z_b    (3.7)

where z_a and z_b are the depth values of the starting and ending endpoints of the segment, respectively.
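A sketch of equations 3.5–3.7 in C (illustrative helper names; points are passed as separate components):

```c
/* t from equation 3.5: projection of the fragment center p_r onto the
 * segment p_a -> p_b, normalized by the squared segment length. */
static float line_t(float prx, float pry, float pax, float pay,
                    float pbx, float pby) {
    float dx = pbx - pax, dy = pby - pay;
    return ((prx - pax) * dx + (pry - pay) * dy) / (dx * dx + dy * dy);
}

/* Perspective-correct datum interpolation, equation 3.6. */
static float line_interp_f(float t, float fa, float wa, float fb, float wb) {
    return ((1.0f - t) * fa / wa + t * fb / wb) /
           ((1.0f - t) / wa + t / wb);
}

/* Linear depth interpolation, equation 3.7. */
static float line_interp_z(float t, float za, float zb) {
    return (1.0f - t) * za + t * zb;
}
```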

3.5.2 Other Line Segment Features

We have just described the rasterization of non-antialiased line segments of width one using the default line stipple of FFFF₁₆. We now describe the rasterization of line segments for general values of the line segment rasterization parameters.

Line Stipple

The command

void LineStipple( int factor, ushort pattern );

defines a line stipple. pattern is an unsigned short integer. The line stipple is taken from the lowest order 16 bits of pattern. It determines those fragments that are to be drawn when the line is rasterized. factor is a count that is used to modify the effective line stipple by causing each bit in the line stipple to be used factor times. factor is clamped to the range [1, 256]. Line stippling may be enabled or disabled using Enable or Disable with the constant LINE_STIPPLE. When disabled, it is as if the line stipple has its default value.

Line stippling masks certain fragments that are produced by rasterization so that they are not sent to the per-fragment stage of the GL. The masking is achieved using three parameters: the 16-bit line stipple p, the line repeat count r, and an integer stipple counter s. Let

b = ⌊s / r⌋ mod 16,

Then a fragment is produced if the bth bit of p is 1, and not produced otherwise. The bits of p are numbered with 0 being the least significant and 15 being the most significant. The initial value of s is zero; s is incremented after production of each fragment of a line segment (fragments are produced in order, beginning at the starting point and working towards the ending point). s is reset to 0 whenever a Begin occurs, and before every line segment in a group of independent segments (as specified when Begin is invoked with LINES).

If the line segment has been clipped, then the value of s at the beginning of the line segment is indeterminate.
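The stipple test reduces to a few integer operations; a sketch (the function name is illustrative):

```c
/* Returns nonzero if fragment number s of a segment survives stippling
 * with 16-bit pattern p and repeat count r (b = floor(s/r) mod 16; bit 0
 * is the least significant). */
static int stipple_pass(unsigned short p, int r, int s) {
    int b = (s / r) % 16;
    return (p >> b) & 1;
}
```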

Wide Lines

The actual width of non-antialiased lines is determined by rounding the supplied width to the nearest integer, then clamping it to the implementation-dependent maximum non-antialiased line width. This implementation-dependent value must be no less than the implementation-dependent maximum antialiased line width, rounded to the nearest integer value, and in any event no less than 1. If rounding the specified width results in the value 0, then it is as if the value were 1.

Non-antialiased line segments of width other than one are rasterized by offsetting them in the minor direction (for an x-major line, the minor direction is y, and for a y-major line, the minor direction is x) and replicating fragments in the minor direction (see figure 3.5). Let w be the width rounded to the nearest integer (if w = 0, then it is as if w = 1). If the line segment has endpoints given by (x_0, y_0) and (x_1, y_1) in window coordinates, the segment with endpoints (x_0, y_0 − (w − 1)/2) and (x_1, y_1 − (w − 1)/2) is rasterized, but instead of a single fragment, a column of fragments of height w (a row of fragments of length w for a y-major segment) is produced at each x (y for y-major) location. The lowest fragment of this column is the fragment that would be produced by rasterizing the segment of width 1 with the modified coordinates. The whole column is not produced if the stipple bit for the column's x location is zero; otherwise, the whole column is produced.

Antialiasing

Rasterized antialiased line segments produce fragments whose fragment squares intersect a rectangle centered on the line segment. Two of the edges are parallel to the specified line segment; each is at a distance of one-half the current width from that segment: one above the segment and one below it. The other two edges pass through the line endpoints and are perpendicular to the direction of the specified line segment. Coverage values are computed for each fragment by computing the area of the intersection of the rectangle with the fragment square (see figure 3.6; see also section 3.3). Equation 3.6 is used to compute associated data values just as with non-antialiased lines; equation 3.5 is used to find the value of t for each fragment whose square is intersected by the line segment's rectangle. Not all widths need be supported for line segment antialiasing, but width 1.0 antialiased segments must be provided. As with the point width, a GL implementation may be queried for the range and number of gradations of available antialiased line widths.

For purposes of antialiasing, a stippled line is considered to be a sequence of contiguous rectangles centered on the line segment. Each rectangle has width equal to the current line width and length equal to 1 pixel (except the last, which may be shorter). These rectangles are numbered from 0 to n, starting with the rectangle incident on the starting endpoint of the segment. Each of these rectangles is either eliminated or produced according to the procedure given under Line Stipple, above, where “fragment” is replaced with “rectangle.” Each rectangle so produced is rasterized as if it were an antialiased polygon, described below (but culling, non-default settings of PolygonMode, and polygon stippling are not applied).

3.5.3 Line Rasterization State

The state required for line rasterization consists of the floating-point line width, a 16-bit line stipple, the line stipple repeat count, a bit indicating whether stippling is enabled or disabled, and a bit indicating whether line antialiasing is on or off. In addition, during rasterization, an integer stipple counter must be maintained to implement line stippling. The initial value of the line width is 1.0. The initial value of the line stipple is FFFF₁₆ (a stipple of all ones). The initial value of the line stipple repeat count is one. The initial state of line stippling is disabled. The initial state of line segment antialiasing is disabled.

3.5.4 Line Multisample Rasterization

If MULTISAMPLE is enabled, and the value of SAMPLE_BUFFERS is one, then lines are rasterized using the following algorithm, regardless of whether line antialiasing (LINE_SMOOTH) is enabled or disabled. Line rasterization produces a fragment for each framebuffer pixel with one or more sample points that intersect the rectangular region that is described in the Antialiasing portion of section 3.5.2 (Other Line Segment Features). If line stippling is enabled, the rectangular region is subdivided into adjacent unit-length rectangles, with some rectangles eliminated according to the procedure given in section 3.5.2, where “fragment” is replaced by “rectangle”.

Coverage bits that correspond to sample points that intersect a retained rectangle are 1, other coverage bits are 0. Each color, depth, and set of texture coordinates is produced by substituting the corresponding sample location into equation 3.5, then using the result to evaluate equation 3.7. An implementation may choose to assign the same color value and the same set of texture coordinates to more than one sample by evaluating equation 3.5 at any location within the pixel including the fragment center or any one of the sample locations, then substituting into equation 3.6. The color value and the set of texture coordinates need not be evaluated at the same location.

Line width range and number of gradations are equivalent to those supported for antialiased lines.

3.6 Polygons

A polygon results from a polygon Begin/End object, a triangle resulting from a triangle strip, triangle fan, or series of separate triangles, or a quadrilateral arising from a quadrilateral strip, series of separate quadrilaterals, or a Rect command. Like points and line segments, polygon rasterization is controlled by several variables. Polygon antialiasing is controlled with Enable and Disable with the symbolic constant POLYGON_SMOOTH. The analog to line segment stippling for polygons is polygon stippling, described below.

3.6.1 Basic Polygon Rasterization

The first step of polygon rasterization is to determine if the polygon is back facing or front facing. This determination is made by examining the sign of the area computed by equation 2.6 of section 2.19.1 (including the possible reversal of this sign as indicated by the last call to FrontFace). If this sign is positive, the polygon is front facing; otherwise, it is back facing. This determination is used in conjunction with the CullFace enable bit and mode value to decide whether or not a particular polygon is rasterized. The CullFace mode is set by calling

void CullFace( enum mode );

mode is a symbolic constant: one of FRONT, BACK or FRONT_AND_BACK. Culling is enabled or disabled with Enable or Disable using the symbolic constant CULL_FACE. Front facing polygons are rasterized if either culling is disabled or the CullFace mode is BACK while back facing polygons are rasterized only if either culling is disabled or the CullFace mode is FRONT. The initial setting of the CullFace mode is BACK. Initially, culling is disabled.

The rule for determining which fragments are produced by polygon rasterization is called point sampling. The two-dimensional projection obtained by taking the x and y window coordinates of the polygon's vertices is formed. Fragment centers that lie inside of this polygon are produced by rasterization. Special treatment is given to a fragment whose center lies on a polygon boundary edge. In such a case we require that if two polygons lie on either side of a common edge (with identical endpoints) on which a fragment center lies, then exactly one of the polygons results in the production of the fragment during rasterization.

As for the data associated with each fragment produced by rasterizing a polygon, we begin by specifying how these values are produced for fragments in a triangle. Define barycentric coordinates for a triangle. Barycentric coordinates are a set of three numbers, a, b, and c, each in the range [0, 1], with a + b + c = 1. These coordinates uniquely specify any point p within the triangle or on the triangle's boundary as

p = a p_a + b p_b + c p_c,

where p_a, p_b, and p_c are the vertices of the triangle. a, b, and c can be found as

a = A(p p_b p_c) / A(p_a p_b p_c),  b = A(p p_a p_c) / A(p_a p_b p_c),  c = A(p p_a p_b) / A(p_a p_b p_c),

where A(l m n) denotes the area in window coordinates of the triangle with vertices l, m, and n.

Denote an associated datum at p_a, p_b, or p_c as f_a, f_b, or f_c, respectively. Then the value f of a datum at a fragment produced by rasterizing a triangle is given by

f = ( a f_a/w_a + b f_b/w_b + c f_c/w_c ) / ( a/w_a + b/w_b + c/w_c )    (3.8)

where w_a, w_b and w_c are the clip w coordinates of p_a, p_b, and p_c, respectively. a, b, and c are the barycentric coordinates of the fragment for which the data are produced. a, b, and c must correspond precisely to the exact coordinates of the center of the fragment. Another way of saying this is that the data associated with a fragment must be sampled at the fragment's center. However, depth values for polygons must be interpolated by

z = a z_a + b z_b + c z_c,

where z_a, z_b, and z_c are the depth values of p_a, p_b, and p_c, respectively.
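Equation 3.8 and the linear depth rule can be sketched as follows (illustrative helper names; a, b, c are the fragment's barycentric coordinates):

```c
/* Perspective-correct attribute interpolation over a triangle, eq. 3.8. */
static float tri_interp_f(float a, float b, float c,
                          float fa, float fb, float fc,
                          float wa, float wb, float wc) {
    return (a * fa / wa + b * fb / wb + c * fc / wc) /
           (a / wa + b / wb + c / wc);
}

/* Linear (non-perspective) depth interpolation for polygons. */
static float tri_interp_z(float a, float b, float c,
                          float za, float zb, float zc) {
    return a * za + b * zb + c * zc;
}
```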

3.6. POLYGONS

For a polygon with more than three edges, we require only that a convex combination of the values of the datum at the polygon's vertices can be used to obtain the value assigned to each fragment produced by the rasterization algorithm. That is, it must be the case that at every fragment

f = Σ(i = 1 to n) a_i f_i

where n is the number of vertices in the polygon, f_i is the value of f at vertex i; for each i, 0 ≤ a_i ≤ 1 and Σ(i = 1 to n) a_i = 1. The values of the a_i may differ from fragment to fragment, but at vertex i, a_j = 0 for j ≠ i and a_i = 1.

One algorithm that achieves the required behavior is to triangulate a polygon (without adding any vertices) and then treat each triangle individually as already discussed. A scan-line rasterizer that linearly interpolates data along each edge and then linearly interpolates data across each horizontal span from edge to edge also satisfies the restrictions (in this case, the numerator and denominator of equation 3.8 should be iterated independently and a division performed for each fragment).

3.6.2 Stippling

Polygon stippling works much the same way as line stippling, masking out certain fragments produced by rasterization so that they are not sent to the next stage of the GL. This is the case regardless of the state of polygon antialiasing. Stippling is controlled with

void PolygonStipple( ubyte *pattern );

pattern is a pointer to memory into which a 32 × 32 pattern is packed. The pattern is unpacked from memory according to the procedure given in section 3.7.4 for DrawPixels; it is as if the height and width passed to that command were both equal to 32, the type were BITMAP, and the format were COLOR_INDEX. The unpacked values (before any conversion or arithmetic would have been performed) form a stipple pattern of zeros and ones.

If x_w and y_w are the window coordinates of a rasterized polygon fragment, then that fragment is sent to the next stage of the GL if and only if the bit of the pattern (x_w mod 32, y_w mod 32) is 1.
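The fragment test is a simple bit lookup. The sketch below assumes the unpacked 32 × 32 pattern is stored one row per 32-bit word, with bit (x_w mod 32) of word (y_w mod 32); the actual memory unpacking follows the DrawPixels rules and is not shown, and stripes_pass is a hypothetical example pattern.

```c
/* Nonzero if the fragment at window coordinates (xw, yw) passes the
 * polygon stipple. Row-per-word packing is an assumption of this sketch. */
static int polygon_stipple_pass(const unsigned int pattern[32], int xw, int yw) {
    return (pattern[yw & 31] >> (xw & 31)) & 1;
}

/* Hypothetical example pattern of horizontal stripes (odd rows on, even
 * rows off), generated on the fly instead of stored in memory. */
static int stripes_pass(int xw, int yw) {
    unsigned int row = ((yw & 31) % 2) ? 0xFFFFFFFFu : 0x00000000u;
    return (row >> (xw & 31)) & 1;
}
```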

Polygon stippling may be enabled or disabled with Enable or Disable using the constant POLYGON_STIPPLE. When disabled, it is as if the stipple pattern were all ones.

3.6. POLYGONS

3.6.3 Antialiasing

Polygon antialiasing rasterizes a polygon by producing a fragment wherever the interior of the polygon intersects that fragment's square. A coverage value is computed at each such fragment, and this value is saved to be applied as described in section 3.13. An associated datum is assigned to a fragment by integrating the datum's value over the region of the intersection of the fragment square with the polygon's interior and dividing this integrated value by the area of the intersection. For a fragment square lying entirely within the polygon, the value of a datum at the fragment's center may be used instead of integrating the value across the fragment.

Polygon stippling operates in the same way whether polygon antialiasing is enabled or not. The polygon point sampling rule defined in section 3.6.1, however, is not enforced for antialiased polygons.

3.6.4 Options Controlling Polygon Rasterization

The interpretation of polygons for rasterization is controlled using

void PolygonMode( enum face, enum mode );

face is one of FRONT, BACK, or FRONT_AND_BACK, indicating that the rasterizing method described by mode replaces the rasterizing method for front facing polygons, back facing polygons, or both front and back facing polygons, respectively. mode is one of the symbolic constants POINT, LINE, or FILL. Calling PolygonMode with POINT causes certain vertices of a polygon to be treated, for rasterization purposes, just as if they were enclosed within a Begin(POINT) and End pair. The vertices selected for this treatment are those that have been tagged as having a polygon boundary edge beginning on them (see section 2.6.2). LINE causes edges that are tagged as boundary to be rasterized as line segments. (The line stipple counter is reset at the beginning of the first rasterized edge of the polygon, but not for subsequent edges.) FILL is the default mode of polygon rasterization, corresponding to the description in sections 3.6.1, 3.6.2, and 3.6.3. Note that these modes affect only the final rasterization of polygons: in particular, a polygon's vertices are lit, and the polygon is clipped and possibly culled before these modes are applied.

Polygon antialiasing applies only to the FILL state of PolygonMode. For POINT or LINE, point antialiasing or line segment antialiasing, respectively, apply.

3.6. POLYGONS

3.6.5 Depth Offset

The depth values of all fragments generated by the rasterization of a polygon may be offset by a single value that is computed for that polygon. The function that determines this value is specified by calling

void PolygonOffset( float factor, float units );

factor scales the maximum depth slope of the polygon, and units scales an implementation dependent constant that relates to the usable resolution of the depth buffer. The resulting values are summed to produce the polygon offset value. Both factor and units may be either positive or negative.

The maximum depth slope m of a triangle is

    m = sqrt( (∂zw/∂xw)^2 + (∂zw/∂yw)^2 )        (3.9)

where (xw, yw, zw) is a point on the triangle. m may be approximated as

    m = max( |∂zw/∂xw| , |∂zw/∂yw| ).        (3.10)

If the polygon has more than three vertices, one or more values of m may be used during rasterization. Each may take any value in the range [min, max], where min and max are the smallest and largest values obtained by evaluating equation 3.9 or equation 3.10 for the triangles formed by all three-vertex combinations.

The minimum resolvable difference r is an implementation-dependent parameter that depends on the depth buffer representation. It is the smallest difference in window coordinate z values that is guaranteed to remain distinct throughout polygon rasterization and in the depth buffer. All pairs of fragments generated by the rasterization of two polygons with otherwise identical vertices, but zw values that differ by r, will have distinct depth values.

For fixed-point depth buffer representations, r is constant throughout the range of the entire depth buffer. For floating-point depth buffers, there is no single minimum resolvable difference. In this case, the minimum resolvable difference for a given polygon is dependent on the maximum exponent, e, in the range of z values spanned by the primitive. If n is the number of bits in the floating-point mantissa, the minimum resolvable difference, r, for the given primitive is defined as

    r = 2^(e−n).

Version 3.0 -August 11, 2008

The offset value o for a polygon is

o = m × factor + r × units. (3.11)

m is computed as described above. If the depth buffer uses a fixed-point representation, m is a function of depth values in the range [0, 1], and o is applied to depth values in the same range.

Boolean state values POLYGON_OFFSET_POINT, POLYGON_OFFSET_LINE, and POLYGON_OFFSET_FILL determine whether o is applied during the rasterization of polygons in POINT, LINE, and FILL modes. These boolean state values are enabled and disabled as argument values to the commands Enable and Disable. If POLYGON_OFFSET_POINT is enabled, o is added to the depth value of each fragment produced by the rasterization of a polygon in POINT mode. Likewise, if POLYGON_OFFSET_LINE or POLYGON_OFFSET_FILL is enabled, o is added to the depth value of each fragment produced by the rasterization of a polygon in LINE or FILL modes, respectively.

For fixed-point depth buffers, fragment depth values are always limited to the range [0, 1], either by clamping after offset addition is performed (preferred), or by clamping the vertex values used in the rasterization of the polygon. Fragment depth values are clamped even when the depth buffer uses a floating-point representation.

3.6.6 Polygon Multisample Rasterization

If MULTISAMPLE is enabled and the value of SAMPLE_BUFFERS is one, then polygons are rasterized using the following algorithm, regardless of whether polygon antialiasing (POLYGON_SMOOTH) is enabled or disabled. Polygon rasterization produces a fragment for each framebuffer pixel with one or more sample points that satisfy the point sampling criteria described in section 3.6.1, including the special treatment for sample points that lie on a polygon boundary edge. If a polygon is culled, based on its orientation and the CullFace mode, then no fragments are produced during rasterization. Fragments are culled by the polygon stipple just as they are for aliased and antialiased polygons.

Coverage bits that correspond to sample points that satisfy the point sampling criteria are 1, other coverage bits are 0. Each color, depth, and set of texture coordinates is produced by substituting the corresponding sample location into the barycentric equations described in section 3.6.1, using the approximation to equation 3.8 that omits w components. An implementation may choose to assign the same color value and the same set of texture coordinates to more than one sample by barycentric evaluation using any location within the pixel including the fragment center or one of the sample locations. The color value and the set of texture coordinates need not be evaluated at the same location.

The rasterization described above applies only to the FILL state of PolygonMode. For POINT and LINE, the rasterizations described in sections 3.4.3 (Point Multisample Rasterization) and 3.5.4 (Line Multisample Rasterization) apply.

3.6.7 Polygon Rasterization State

The state required for polygon rasterization consists of a polygon stipple pattern, whether stippling is enabled or disabled, the current state of polygon antialiasing (enabled or disabled), the current values of the PolygonMode setting for each of front and back facing polygons, whether point, line, and fill mode polygon offsets are enabled or disabled, and the factor and bias values of the polygon offset equation. The initial stipple pattern is all ones; initially stippling is disabled. The initial setting of polygon antialiasing is disabled. The initial state for PolygonMode is FILL for both front and back facing polygons. The initial polygon offset factor and bias values are both 0; initially polygon offset is disabled for all modes.

3.7 Pixel Rectangles

Rectangles of color, depth, and certain other values may be converted to fragments using the DrawPixels command (described in section 3.7.4). Some of the parameters and operations governing the operation of DrawPixels are shared by ReadPixels (used to obtain pixel values from the framebuffer) and CopyPixels (used to copy pixels from one framebuffer location to another); the discussion of ReadPixels and CopyPixels, however, is deferred until chapter 4 after the framebuffer has been discussed in detail. Nevertheless, we note in this section when parameters and state pertaining to DrawPixels also pertain to ReadPixels or CopyPixels.

A number of parameters control the encoding of pixels in buffer object or client memory (for reading and writing) and how pixels are processed before being placed in or after being read from the framebuffer (for reading, writing, and copying). These parameters are set with three commands: PixelStore, PixelTransfer, and PixelMap.

3.7.1 Pixel Storage Modes and Pixel Buffer Objects

Pixel storage modes affect the operation of DrawPixels and ReadPixels (as well as other commands; see sections 3.6.2, 3.8, and 3.9) when one of these commands is issued. This may differ from the time that the command is executed if the command is placed in a display list (see section 5.4). Pixel storage modes are set with

Parameter Name        Type     Initial Value   Valid Range
UNPACK_SWAP_BYTES     boolean  FALSE           TRUE/FALSE
UNPACK_LSB_FIRST      boolean  FALSE           TRUE/FALSE
UNPACK_ROW_LENGTH     integer  0               [0, ∞)
UNPACK_SKIP_ROWS      integer  0               [0, ∞)
UNPACK_SKIP_PIXELS    integer  0               [0, ∞)
UNPACK_ALIGNMENT      integer  4               1, 2, 4, 8
UNPACK_IMAGE_HEIGHT   integer  0               [0, ∞)
UNPACK_SKIP_IMAGES    integer  0               [0, ∞)

Table 3.1: PixelStore parameters pertaining to one or more of DrawPixels, ColorTable, ColorSubTable, ConvolutionFilter1D, ConvolutionFilter2D, SeparableFilter2D, PolygonStipple, TexImage1D, TexImage2D, TexImage3D, TexSubImage1D, TexSubImage2D, and TexSubImage3D.

void PixelStore{if}( enum pname, T param );

pname is a symbolic constant indicating a parameter to be set, and param is the value to set it to. Table 3.1 summarizes the pixel storage parameters, their types, their initial values, and their allowable ranges. Setting a parameter to a value outside the given range results in the error INVALID_VALUE.

The version of PixelStore that takes a floating-point value may be used to set any type of parameter; if the parameter is boolean, then it is set to FALSE if the passed value is 0.0 and TRUE otherwise, while if the parameter is an integer, then the passed value is rounded to the nearest integer. The integer version of the command may also be used to set any type of parameter; if the parameter is boolean, then it is set to FALSE if the passed value is 0 and TRUE otherwise, while if the parameter is a floating-point value, then the passed value is converted to floating-point.

In addition to storing pixel data in client memory, pixel data may also be stored in buffer objects (described in section 2.9). The current pixel unpack and pack buffer objects are designated by the PIXEL_UNPACK_BUFFER and PIXEL_PACK_BUFFER targets respectively.

Initially, zero is bound for the PIXEL_UNPACK_BUFFER, indicating that image specification commands such as DrawPixels source their pixels from client memory pointer parameters. However, if a non-zero buffer object is bound as the current pixel unpack buffer, then the pointer parameter is treated as an offset into the designated buffer object.

3.7.2 The Imaging Subset

Some pixel transfer and per-fragment operations are only made available in GL implementations which incorporate the optional imaging subset. The imaging subset includes both new commands, and new enumerants allowed as parameters to existing commands. If the subset is supported, all of these calls and enumerants must be implemented as described later in the GL specification. If the subset is not supported, calling any unsupported command generates the error INVALID_OPERATION, and using any of the new enumerants generates the error INVALID_ENUM.

The individual operations available only in the imaging subset are described in section 3.7.3. Imaging subset operations include:

  1. Color tables, including all commands and enumerants described in subsections Color Table Specification, Alternate Color Table Specification Commands, Color Table State and Proxy State, Color Table Lookup, Post Convolution Color Table Lookup, and Post Color Matrix Color Table Lookup, as well as the query commands described in section 6.1.7.

  2. Convolution, including all commands and enumerants described in subsections Convolution Filter Specification, Alternate Convolution Filter Specification Commands, and Convolution, as well as the query commands described in section 6.1.8.

  3. Color matrix, including all commands and enumerants described in subsections Color Matrix Specification and Color Matrix Transformation, as well as the simple query commands described in section 6.1.6.

  4. Histogram and minmax, including all commands and enumerants described in subsections Histogram Table Specification, Histogram State and Proxy State, Histogram, Minmax Table Specification, and Minmax, as well as the query commands described in section 6.1.9 and section 6.1.10.

The imaging subset is supported only if the EXTENSIONS string includes the substring "GL_ARB_imaging". Querying EXTENSIONS is described in section 6.1.11.

If the imaging subset is not supported, the related pixel transfer operations are not performed; pixels are passed unchanged to the next operation.

3.7.3 Pixel Transfer Modes

Pixel transfer modes affect the operation of DrawPixels (section 3.7.4), ReadPixels (section 4.3.2), and CopyPixels (section 4.3.3) at the time when one of these

Parameter Name              Type     Initial Value   Valid Range
MAP_COLOR                   boolean  FALSE           TRUE/FALSE
MAP_STENCIL                 boolean  FALSE           TRUE/FALSE
INDEX_SHIFT                 integer  0               (−∞, ∞)
INDEX_OFFSET                integer  0               (−∞, ∞)
x_SCALE                     float    1.0             (−∞, ∞)
DEPTH_SCALE                 float    1.0             (−∞, ∞)
x_BIAS                      float    0.0             (−∞, ∞)
DEPTH_BIAS                  float    0.0             (−∞, ∞)
POST_CONVOLUTION_x_SCALE    float    1.0             (−∞, ∞)
POST_CONVOLUTION_x_BIAS     float    0.0             (−∞, ∞)
POST_COLOR_MATRIX_x_SCALE   float    1.0             (−∞, ∞)
POST_COLOR_MATRIX_x_BIAS    float    0.0             (−∞, ∞)

Table 3.2: PixelTransfer parameters. x is RED, GREEN, BLUE, or ALPHA.

commands is executed (which may differ from the time the command is issued). Some pixel transfer modes are set with

void PixelTransfer{if}( enum param, T value );

param is a symbolic constant indicating a parameter to be set, and value is the value to set it to. Table 3.2 summarizes the pixel transfer parameters that are set with PixelTransfer, their types, their initial values, and their allowable ranges. Setting a parameter to a value outside the given range results in the error INVALID_VALUE. The same versions of the command exist as for PixelStore, and the same rules apply to accepting and converting passed values to set parameters.

The pixel map lookup tables are set with

void PixelMap{ui us f}v( enum map, sizei size, T values );

map is a symbolic map name, indicating the map to set, size indicates the size of the map, and values refers to an array of size map values.

The entries of a table may be specified using one of three types: single-precision floating-point, unsigned short integer, or unsigned integer, depending on which of the three versions of PixelMap is called. A table entry is converted to the appropriate type when it is specified. An entry giving a color component value is converted according to table 2.10 and then clamped to the range [0,1]. An entry giving a color index value is converted from an unsigned short integer or unsigned

Map Name           Address      Value        Init. Size   Init. Value
PIXEL_MAP_I_TO_I   color idx    color idx    1            0.0
PIXEL_MAP_S_TO_S   stencil idx  stencil idx  1            0
PIXEL_MAP_I_TO_R   color idx    R            1            0.0
PIXEL_MAP_I_TO_G   color idx    G            1            0.0
PIXEL_MAP_I_TO_B   color idx    B            1            0.0
PIXEL_MAP_I_TO_A   color idx    A            1            0.0
PIXEL_MAP_R_TO_R   R            R            1            0.0
PIXEL_MAP_G_TO_G   G            G            1            0.0
PIXEL_MAP_B_TO_B   B            B            1            0.0
PIXEL_MAP_A_TO_A   A            A            1            0.0

Table 3.3: PixelMap parameters.

integer to floating-point. An entry giving a stencil index is converted from single-precision floating-point to an integer by rounding to nearest. The various tables and their initial sizes and entries are summarized in table 3.3. A table that takes an index as an address must have size = 2^n or the error INVALID_VALUE results. The maximum allowable size of each table is specified by the implementation dependent value MAX_PIXEL_MAP_TABLE, but must be at least 32 (a single maximum applies to all tables). The error INVALID_VALUE is generated if a size larger than the implemented maximum, or less than one, is given to PixelMap.

If a pixel unpack buffer is bound (as indicated by a non-zero value of PIXEL_UNPACK_BUFFER_BINDING), values is an offset into the pixel unpack buffer; otherwise, values is a pointer to client memory. All pixel storage and pixel transfer modes are ignored when specifying a pixel map. n machine units are read where n is the size of the pixel map times the size of a float, uint, or ushort datum in basic machine units, depending on the respective PixelMap version. If a pixel unpack buffer object is bound and data + n is greater than the size of the pixel buffer, an INVALID_OPERATION error results. If a pixel unpack buffer object is bound and values is not evenly divisible by the number of basic machine units needed to store in memory a float, uint, or ushort datum depending on their respective PixelMap version, an INVALID_OPERATION error results.

Color Table Specification

Color lookup tables are specified with

Table Name                            Type
COLOR_TABLE                           regular
POST_CONVOLUTION_COLOR_TABLE          regular
POST_COLOR_MATRIX_COLOR_TABLE         regular
PROXY_COLOR_TABLE                     proxy
PROXY_POST_CONVOLUTION_COLOR_TABLE    proxy
PROXY_POST_COLOR_MATRIX_COLOR_TABLE   proxy

Table 3.4: Color table names. Regular tables have associated image data. Proxy tables have no image data, and are used only to determine if an image can be loaded into the corresponding regular table.

void ColorTable( enum target, enum internalformat, sizei width, enum format, enum type, void *data );

target must be one of the regular color table names listed in table 3.4 to define the table. A proxy table name is a special case discussed later in this section. width, format, type, and data specify an image in memory with the same meaning and allowed values as the corresponding arguments to DrawPixels (see section 3.7.4), with height taken to be 1. The maximum allowable width of a table is implementation-dependent, but must be at least 32. The formats COLOR_INDEX, DEPTH_COMPONENT, DEPTH_STENCIL, and STENCIL_INDEX and the type BITMAP are not allowed.

The specified image is taken from memory and processed just as if DrawPixels were called, stopping after the final expansion to RGBA. The R, G, B, and A components of each pixel are then scaled by the four COLOR_TABLE_SCALE parameters and biased by the four COLOR_TABLE_BIAS parameters. These parameters are set by calling ColorTableParameterfv as described below. If fragment color clamping is enabled or internalformat is fixed-point, components are clamped to [0, 1]. Otherwise, components are not modified.

Components are then selected from the resulting R, G, B, and A values to obtain a table with the base internal format specified by (or derived from) internalformat, in the same manner as for textures (section 3.9.1). internalformat must be one of the formats in table 3.15 or tables 3.16-3.18, with the exception of the RED, RG, DEPTH_COMPONENT, and DEPTH_STENCIL base and sized internal formats in those tables, all sized internal formats with non-fixed internal data types (see section 3.9), and sized internal format RGB9_E5.

The color lookup table is redefined to have width entries, each with the specified internal format. The table is formed with indices 0 through width − 1. Table location i is specified by the ith image pixel, counting from zero.

The error INVALID_VALUE is generated if width is not zero or a non-negative power of two. The error TABLE_TOO_LARGE is generated if the specified color lookup table is too large for the implementation.

The scale and bias parameters for a table are specified by calling

void ColorTableParameter{if}v( enum target, enum pname, T params );

target must be a regular color table name. pname is one of COLOR_TABLE_SCALE or COLOR_TABLE_BIAS. params points to an array of four values: red, green, blue, and alpha, in that order.

A GL implementation may vary its allocation of internal component resolution based on any ColorTable parameter, but the allocation must not be a function of any other factor, and cannot be changed once it is established. Allocations must be invariant; the same allocation must be made each time a color table is specified with the same parameter values. These allocation rules also apply to proxy color tables, which are described later in this section.

Alternate Color Table Specification Commands

Color tables may also be specified using image data taken directly from the framebuffer, and portions of existing tables may be respecified.

The command

void CopyColorTable( enum target, enum internalformat, int x, int y, sizei width );

defines a color table in exactly the manner of ColorTable, except that table data are taken from the framebuffer, rather than from client memory. target must be a regular color table name. x, y, and width correspond precisely to the corresponding arguments of CopyPixels (refer to section 4.3.3); they specify the image’s width and the lower left (x, y) coordinates of the framebuffer region to be copied. The image is taken from the framebuffer exactly as if these arguments were passed to CopyPixels with argument type set to COLOR and height set to 1, stopping after the final expansion to RGBA.

Subsequent processing is identical to that described for ColorTable, beginning with scaling by COLOR_TABLE_SCALE. Parameters target, internalformat and width are specified using the same values, with the same meanings, as the equivalent arguments of ColorTable. format is taken to be RGBA.

Two additional commands,

void ColorSubTable( enum target, sizei start, sizei count, enum format, enum type, void *data );
void CopyColorSubTable( enum target, sizei start, int x, int y, sizei count );

respecify only a portion of an existing color table. No change is made to the internalformat or width parameters of the specified color table, nor is any change made to table entries outside the specified portion. target must be a regular color table name.

ColorSubTable arguments format, type, and data match the corresponding arguments to ColorTable, meaning that they are specified using the same values, and have the same meanings. Likewise, CopyColorSubTable arguments x, y, and count match the x, y, and width arguments of CopyColorTable. Both of the ColorSubTable commands interpret and process pixel groups in exactly the manner of their ColorTable counterparts, except that the assignment of R, G, B, and A pixel group values to the color table components is controlled by the internalformat of the table, not by an argument to the command.

Arguments start and count of ColorSubTable and CopyColorSubTable specify a subregion of the color table starting at index start and ending at index start + count − 1. Counting from zero, the nth pixel group is assigned to the table entry with index start + n. The error INVALID_VALUE is generated if start + count > width.

Calling CopyColorTable or CopyColorSubTable will result in an INVALID_FRAMEBUFFER_OPERATION error if the object bound to READ_FRAMEBUFFER_BINDING is not framebuffer complete (see section 4.4.4).

Color Table State and Proxy State

The state necessary for color tables can be divided into two categories. For each of the three tables, there is an array of values. Each array has associated with it a width, an integer describing the internal format of the table, six integer values describing the resolutions of each of the red, green, blue, alpha, luminance, and intensity components of the table, and two groups of four floating-point numbers to store the table scale and bias. Each initial array is null (zero width, internal format RGBA, with zero-sized components). The initial value of the scale parameters is (1,1,1,1) and the initial value of the bias parameters is (0,0,0,0).

In addition to the color lookup tables, partially instantiated proxy color lookup tables are maintained. Each proxy table includes width and internal format state values, as well as state for the red, green, blue, alpha, luminance, and intensity component resolutions. Proxy tables do not include image data, nor do they include scale and bias parameters. When ColorTable is executed with target specified as one of the proxy color table names listed in table 3.4, the proxy state values of the table are recomputed and updated. If the table is too large, no error is generated, but the proxy format, width and component resolutions are set to zero. If the color table would be accommodated by ColorTable called with target set to the corresponding regular table name (COLOR_TABLE is the regular name corresponding to PROXY_COLOR_TABLE, for example), the proxy state values are set exactly as though the regular table were being specified. Calling ColorTable with a proxy target has no effect on the image or state of any actual color table.

There is no image associated with any of the proxy targets. They cannot be used as color tables, and they must never be queried using GetColorTable. The error INVALID_ENUM is generated if this is attempted.

Convolution Filter Specification

A two-dimensional convolution filter image is specified by calling

void ConvolutionFilter2D( enum target, enum internalformat, sizei width, sizei height, enum format, enum type, void *data );

target must be CONVOLUTION_2D. width, height, format, type, and data specify an image in memory with the same meaning and allowed values as the corresponding parameters to DrawPixels. The formats COLOR_INDEX, DEPTH_COMPONENT, DEPTH_STENCIL, and STENCIL_INDEX and the type BITMAP are not allowed.

The specified image is extracted from memory and processed just as if DrawPixels were called, stopping after the final expansion to RGBA. The R, G, B, and A components of each pixel are then scaled by the four two-dimensional CONVOLUTION_FILTER_SCALE parameters and biased by the four two-dimensional CONVOLUTION_FILTER_BIAS parameters. These parameters are set by calling ConvolutionParameterfv as described below. No clamping takes place at any time during this process.

Components are then selected from the resulting R, G, B, and A values to obtain a table with the base internal format specified by (or derived from) internalformat, in the same manner as for textures (section 3.9.1). internalformat accepts the same values as the corresponding argument of ColorTable.

The red, green, blue, alpha, luminance, and/or intensity components of the pixels are stored in floating point, rather than integer format. They form a two-dimensional image indexed with coordinates i, j such that i increases from left to right, starting at zero, and j increases from bottom to top, also starting at zero.

Image location i, j is specified by the Nth pixel, counting from zero, where

N = i + j × width

The error INVALID_VALUE is generated if width or height is greater than the maximum supported value. These values are queried with GetConvolutionParameteriv, setting target to CONVOLUTION_2D and pname to MAX_CONVOLUTION_WIDTH or MAX_CONVOLUTION_HEIGHT, respectively.

The scale and bias parameters for a two-dimensional filter are specified by calling

void ConvolutionParameter{if}v( enum target, enum pname, T params );

with target CONVOLUTION_2D. pname is one of CONVOLUTION_FILTER_SCALE or CONVOLUTION_FILTER_BIAS. params points to an array of four values: red, green, blue, and alpha, in that order.

A one-dimensional convolution filter is defined using

void ConvolutionFilter1D( enum target, enum internalformat, sizei width, enum format, enum type, void *data );

target must be CONVOLUTION_1D. internalformat, width, format, and type have identical semantics and accept the same values as do their two-dimensional counterparts. data must point to a one-dimensional image, however.

The image is extracted from memory and processed as if ConvolutionFilter2D were called with a height of 1, except that it is scaled and biased by the one-dimensional CONVOLUTION_FILTER_SCALE and CONVOLUTION_FILTER_BIAS parameters. These parameters are specified exactly as the two-dimensional parameters, except that ConvolutionParameterfv is called with target CONVOLUTION_1D.

The image is formed with coordinates i such that i increases from left to right, starting at zero. Image location i is specified by the ith pixel, counting from zero.

The error INVALID_VALUE is generated if width is greater than the maximum supported value. This value is queried using GetConvolutionParameteriv, setting target to CONVOLUTION_1D and pname to MAX_CONVOLUTION_WIDTH.

Special facilities are provided for the definition of two-dimensional separable filters – filters whose image can be represented as the product of two one-dimensional images, rather than as full two-dimensional images. A two-dimensional separable convolution filter is specified with

void SeparableFilter2D( enum target, enum internalformat, sizei width, sizei height, enum format, enum type, void *row, void *column );

target must be SEPARABLE_2D. internalformat specifies the formats of the table entries of the two one-dimensional images that will be retained. row points to a width pixel wide image of the specified format and type. column points to a height pixel high image, also of the specified format and type.

The two images are extracted from memory and processed as if ConvolutionFilter1D were called separately for each, except that each image is scaled and biased by the two-dimensional separable CONVOLUTION_FILTER_SCALE and CONVOLUTION_FILTER_BIAS parameters. These parameters are specified exactly as the one-dimensional and two-dimensional parameters, except that ConvolutionParameteriv is called with target SEPARABLE_2D.

Alternate Convolution Filter Specification Commands

One and two-dimensional filters may also be specified using image data taken directly from the framebuffer.

The command

void CopyConvolutionFilter2D( enum target, enum internalformat, int x, int y, sizei width, sizei height );

defines a two-dimensional filter in exactly the manner of ConvolutionFilter2D, except that image data are taken from the framebuffer, rather than from client memory. target must be CONVOLUTION_2D. x, y, width, and height correspond precisely to the corresponding arguments of CopyPixels (refer to section 4.3.3); they specify the image’s width and height, and the lower left (x, y) coordinates of the framebuffer region to be copied. The image is taken from the framebuffer exactly as if these arguments were passed to CopyPixels with argument type set to COLOR, stopping after the final expansion to RGBA.

Subsequent processing is identical to that described for ConvolutionFilter2D, beginning with scaling by CONVOLUTION_FILTER_SCALE. Parameters target, internalformat, width, and height are specified using the same values, with the same meanings, as the equivalent arguments of ConvolutionFilter2D. format is taken to be RGBA.

The command

void CopyConvolutionFilter1D( enum target, enum internalformat, int x, int y, sizei width );

defines a one-dimensional filter in exactly the manner of ConvolutionFilter1D, except that image data are taken from the framebuffer, rather than from client memory. target must be CONVOLUTION_1D. x, y, and width correspond precisely to the corresponding arguments of CopyPixels (refer to section 4.3.3); they specify the image’s width and the lower left (x, y) coordinates of the framebuffer region to be copied. The image is taken from the framebuffer exactly as if these arguments were passed to CopyPixels with argument type set to COLOR and height set to 1, stopping after the final expansion to RGBA.

Subsequent processing is identical to that described for ConvolutionFilter1D, beginning with scaling by CONVOLUTION_FILTER_SCALE. Parameters target, internalformat, and width are specified using the same values, with the same meanings, as the equivalent arguments of ConvolutionFilter2D. format is taken to be RGBA.

Calling CopyConvolutionFilter1D or CopyConvolutionFilter2D will result in an INVALID_FRAMEBUFFER_OPERATION error if the object bound to READ_FRAMEBUFFER_BINDING is not framebuffer complete (see section 4.4.4).

Convolution Filter State

The required state for convolution filters includes a one-dimensional image array, two one-dimensional image arrays for the separable filter, and a two-dimensional image array. Each filter has associated with it a width and height (two-dimensional and separable only), an integer describing the internal format of the filter, and two groups of four floating-point numbers to store the filter scale and bias.

Each initial convolution filter is null (zero width and height, internal format RGBA, with zero-sized components). The initial value of all scale parameters is (1,1,1,1) and the initial value of all bias parameters is (0,0,0,0).

Color Matrix Specification

Setting the matrix mode to COLOR causes the matrix operations described in section 2.12.2 to apply to the top matrix on the color matrix stack. All matrix operations have the same effect on the color matrix as they do on the other matrices.

Histogram Table Specification

The histogram table is specified with

void Histogram( enum target, sizei width, enum internalformat, boolean sink );

target must be HISTOGRAM if a histogram table is to be specified. target value PROXY_HISTOGRAM is a special case discussed later in this section. width specifies the number of entries in the histogram table, and internalformat specifies the format of each table entry. The maximum allowable width of the histogram table is implementation-dependent, but must be at least 32. sink specifies whether pixel groups will be consumed by the histogram operation (TRUE) or passed on to the minmax operation (FALSE).

If no error results from the execution of Histogram, the specified histogram table is redefined to have width entries, each with the specified internal format. The entries are indexed 0 through width − 1. Each component in each entry is set to zero. The values in the previous histogram table, if any, are lost.

The error INVALID_VALUE is generated if width is not zero or a non-negative power of two. The error TABLE_TOO_LARGE is generated if the specified histogram table is too large for the implementation. internalformat accepts the same values as the corresponding argument of ColorTable, with the exception of the values 1, 2, 3, and 4.

A GL implementation may vary its allocation of internal component resolution based on any Histogram parameter, but the allocation must not be a function of any other factor, and cannot be changed once it is established. In particular, allocations must be invariant; the same allocation must be made each time a histogram is specified with the same parameter values. These allocation rules also apply to the proxy histogram, which is described later in this section.

Histogram State and Proxy State

The state necessary for histogram operation is an array of values, with which is associated a width, an integer describing the internal format of the histogram, five integer values describing the resolutions of each of the red, green, blue, alpha, and luminance components of the table, and a flag indicating whether or not pixel groups are consumed by the operation. The initial array is null (zero width, internal format RGBA, with zero-sized components). The initial value of the flag is false.

In addition to the histogram table, a partially instantiated proxy histogram table is maintained. It includes width, internal format, and red, green, blue, alpha, and luminance component resolutions. The proxy table does not include image data or the flag. When Histogram is executed with target set to PROXY_HISTOGRAM, the proxy state values are recomputed and updated. If the histogram array is too large, no error is generated, but the proxy format, width, and component resolutions are set to zero. If the histogram table would be accommodated by Histogram called with target set to HISTOGRAM, the proxy state values are set exactly as though the actual histogram table were being specified. Calling Histogram with target PROXY_HISTOGRAM has no effect on the actual histogram table.

There is no image associated with PROXY_HISTOGRAM. It cannot be used as a histogram, and its image must never be queried using GetHistogram. The error INVALID_ENUM results if this is attempted.

Minmax Table Specification

The minmax table is specified with

void Minmax( enum target, enum internalformat, boolean sink );

target must be MINMAX. internalformat specifies the format of the table entries. sink specifies whether pixel groups will be consumed by the minmax operation (TRUE) or passed on to final conversion (FALSE).

internalformat accepts the same values as the corresponding argument of ColorTable, with the exception of the values 1, 2, 3, and 4, as well as the INTENSITY base and sized internal formats. The resulting table always has 2 entries, each with values corresponding only to the components of the internal format.

The state necessary for minmax operation is a table containing two elements (the first element stores the minimum values, the second stores the maximum values), an integer describing the internal format of the table, and a flag indicating whether or not pixel groups are consumed by the operation. The initial state is a minimum table entry set to the maximum representable value and a maximum table entry set to the minimum representable value. Internal format is set to RGBA and the initial value of the flag is false.
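As a hedged illustration of the state just described (not the GL's actual internal representation), the two-entry minmax table can be modeled as a pair of RGBA entries updated per pixel group; the struct and function names here are hypothetical:

```c
#include <float.h>

/* Hypothetical model of minmax state: one entry of per-component
   minima and one of per-component maxima, initialized to the
   extreme representable values as the specification requires. */
typedef struct {
    float min[4]; /* initial: maximum representable value */
    float max[4]; /* initial: minimum representable value */
} MinmaxTable;

static void minmax_init(MinmaxTable *t) {
    for (int i = 0; i < 4; i++) {
        t->min[i] = FLT_MAX;
        t->max[i] = -FLT_MAX;
    }
}

/* Accumulate one RGBA pixel group into the table. */
static void minmax_update(MinmaxTable *t, const float rgba[4]) {
    for (int i = 0; i < 4; i++) {
        if (rgba[i] < t->min[i]) t->min[i] = rgba[i];
        if (rgba[i] > t->max[i]) t->max[i] = rgba[i];
    }
}
```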

3.7.4 Rasterization of Pixel Rectangles

The process of drawing pixels encoded in buffer object or client memory is diagrammed in figure 3.7. We describe the stages of this process in the order in which they occur.

Pixels are drawn using

void DrawPixels( sizei width, sizei height, enum format, enum type, void *data );

format is a symbolic constant indicating what the values in memory represent. width and height are the width and height, respectively, of the pixel rectangle to be drawn. data refers to the data to be drawn. The correspondence between the type token values and the GL data types they indicate is given in table 3.5. If the GL is in color index mode and format is not one of COLOR_INDEX, STENCIL_INDEX, DEPTH_COMPONENT, or DEPTH_STENCIL, then the error INVALID_OPERATION occurs. Results of rasterization are undefined if any of the selected draw buffers of the draw framebuffer have an integer format and no fragment shader is active. If format contains integer components, as shown in table 3.6, an INVALID_OPERATION error is generated. If type is BITMAP and format is not COLOR_INDEX or STENCIL_INDEX then the error INVALID_ENUM occurs. If format is DEPTH_STENCIL and type is not UNSIGNED_INT_24_8 or FLOAT_32_UNSIGNED_INT_24_8_REV, then the error INVALID_ENUM occurs. If format is one of the integer component formats as defined in table 3.6 and type is FLOAT, the error INVALID_ENUM occurs. Some additional constraints on the combinations of format and type values that are accepted are discussed below.

Calling DrawPixels will result in an INVALID_FRAMEBUFFER_OPERATION error if the object bound to DRAW_FRAMEBUFFER_BINDING is not framebuffer complete (see section 4.4.4).

Unpacking

Data are taken from the currently bound pixel unpack buffer or client memory as a sequence of signed or unsigned bytes (GL data types byte and ubyte), signed or unsigned short integers (GL data types short and ushort), signed or unsigned integers (GL data types int and uint), or floating-point values (GL data types half and float). These elements are grouped into sets of one, two, three, or four values, depending on the format, to form a group. Table 3.6 summarizes the format of groups obtained from memory; it also indicates those formats that yield indices and those that yield floating-point or integer components.

If a pixel unpack buffer is bound (as indicated by a non-zero value of PIXEL_UNPACK_BUFFER_BINDING), data is an offset into the pixel unpack buffer and the pixels are unpacked from the buffer relative to this offset; otherwise, data is a pointer to client memory and the pixels are unpacked from client memory relative to the pointer. If a pixel unpack buffer object is bound and unpacking the pixel data according to the process described below would access memory beyond the size of the pixel unpack buffer’s memory size, an INVALID_OPERATION error results. If a pixel unpack buffer object is bound and data is not evenly divisible by the number of basic machine units needed to store in memory the corresponding GL data type from table 3.5 for the type parameter, an INVALID_OPERATION error results.

By default the values of each GL data type are interpreted as they would be specified in the language of the client’s GL binding. If UNPACK_SWAP_BYTES is enabled, however, then the values are interpreted with the bit orderings modified as per table 3.7. The modified bit orderings are defined only if the GL data type


type Parameter Token Name       Corresponding GL Data Type   Special Interpretation
UNSIGNED_BYTE                   ubyte                        No
BITMAP                          ubyte                        Yes
BYTE                            byte                         No
UNSIGNED_SHORT                  ushort                       No
SHORT                           short                        No
UNSIGNED_INT                    uint                         No
INT                             int                          No
HALF_FLOAT                      half                         No
FLOAT                           float                        No
UNSIGNED_BYTE_3_3_2             ubyte                        Yes
UNSIGNED_BYTE_2_3_3_REV         ubyte                        Yes
UNSIGNED_SHORT_5_6_5            ushort                       Yes
UNSIGNED_SHORT_5_6_5_REV        ushort                       Yes
UNSIGNED_SHORT_4_4_4_4          ushort                       Yes
UNSIGNED_SHORT_4_4_4_4_REV      ushort                       Yes
UNSIGNED_SHORT_5_5_5_1          ushort                       Yes
UNSIGNED_SHORT_1_5_5_5_REV      ushort                       Yes
UNSIGNED_INT_8_8_8_8            uint                         Yes
UNSIGNED_INT_8_8_8_8_REV        uint                         Yes
UNSIGNED_INT_10_10_10_2         uint                         Yes
UNSIGNED_INT_2_10_10_10_REV     uint                         Yes
UNSIGNED_INT_24_8               uint                         Yes
UNSIGNED_INT_10F_11F_11F_REV    uint                         Yes
UNSIGNED_INT_5_9_9_9_REV        uint                         Yes
FLOAT_32_UNSIGNED_INT_24_8_REV  n/a                          Yes

Table 3.5: DrawPixels and ReadPixels type parameter values and the corresponding GL data types. Refer to table 2.2 for definitions of GL data types. Special interpretations are described near the end of section 3.7.4.


Format Name        Element Meaning and Order   Target Buffer
COLOR_INDEX        Color Index                 Color
STENCIL_INDEX      Stencil Index               Stencil
DEPTH_COMPONENT    Depth                       Depth
DEPTH_STENCIL      Depth and Stencil Index     Depth and Stencil
RED                R                           Color
GREEN              G                           Color
BLUE               B                           Color
ALPHA              A                           Color
RG                 R, G                        Color
RGB                R, G, B                     Color
RGBA               R, G, B, A                  Color
BGR                B, G, R                     Color
BGRA               B, G, R, A                  Color
LUMINANCE          Luminance                   Color
LUMINANCE_ALPHA    Luminance, A                Color
RED_INTEGER        iR                          Color
GREEN_INTEGER      iG                          Color
BLUE_INTEGER       iB                          Color
ALPHA_INTEGER      iA                          Color
RG_INTEGER         iR, iG                      Color
RGB_INTEGER        iR, iG, iB                  Color
RGBA_INTEGER       iR, iG, iB, iA              Color
BGR_INTEGER        iB, iG, iR                  Color
BGRA_INTEGER       iB, iG, iR, iA              Color

Table 3.6: DrawPixels and ReadPixels formats. The second column gives a description of and the number and order of elements in a group. Unless specified as an index, formats yield components. Components are floating-point unless prefixed with the letter ’i’, which indicates they are integer.


Element Size   Default Bit Ordering   Modified Bit Ordering
8 bit          [7..0]                 [7..0]
16 bit         [15..0]                [7..0][15..8]
32 bit         [31..0]                [7..0][15..8][23..16][31..24]

Table 3.7: Bit ordering modification of elements when UNPACK_SWAP_BYTES is enabled. These reorderings are defined only when GL data type ubyte has 8 bits, and then only for GL data types with 8, 16, or 32 bits. Bit 0 is the least significant.

ubyte has eight bits, and then for each specific GL data type only if that type is represented with 8, 16, or 32 bits.
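The bit reorderings of table 3.7 amount to byte swaps. A minimal sketch (the helper names are ours, not part of the GL API):

```c
#include <stdint.h>

/* Swap the two bytes of a 16-bit element: [15..0] -> [7..0][15..8]. */
static uint16_t swap16(uint16_t v) {
    return (uint16_t)((v << 8) | (v >> 8));
}

/* Reverse the four bytes of a 32-bit element:
   [31..0] -> [7..0][15..8][23..16][31..24]. */
static uint32_t swap32(uint32_t v) {
    return (v << 24) | ((v & 0x0000FF00u) << 8) |
           ((v & 0x00FF0000u) >> 8) | (v >> 24);
}
```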

The groups in memory are treated as being arranged in a rectangle. This rectangle consists of a series of rows, with the first element of the first group of the first row pointed to by the pointer passed to DrawPixels. If the value of UNPACK_ROW_LENGTH is not positive, then the number of groups in a row is width; otherwise the number of groups is UNPACK_ROW_LENGTH. If p indicates the location in memory of the first element of the first row, then the first element of the Nth

row is indicated by
p + Nk (3.12)
where N is the row number (counting from zero) and k is defined as
k =  nl,               s ≥ a
     (a/s) ⌈snl/a⌉,    s < a        (3.13)

where n is the number of elements in a group, l is the number of groups in the row, a is the value of UNPACK_ALIGNMENT, and s is the size, in units of GL ubytes, of an element. If the number of bits per element is not 1, 2, 4, or 8 times the number of bits in a GL ubyte, then k = nl for all values of a.
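Equation 3.13 can be sketched directly in C; the function name is ours, and a and s are assumed to be powers of two as the GL requires:

```c
#include <stddef.h>

/* Row-to-row advance k (in elements) from equation 3.13.
   n: elements per group, l: groups per row,
   a: UNPACK_ALIGNMENT, s: element size in ubytes. */
static size_t row_stride_elements(size_t n, size_t l, size_t a, size_t s) {
    if (s >= a)
        return n * l;
    /* (a/s) * ceil(s*n*l / a), using integer ceiling division */
    return (a / s) * ((s * n * l + a - 1) / a);
}
```

For example, a 5-pixel RGB ubyte row (n = 3, s = 1) with the default alignment a = 4 occupies 15 elements but advances by 16.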

There is a mechanism for selecting a sub-rectangle of groups from a larger containing rectangle. This mechanism relies on three integer parameters: UNPACK_ROW_LENGTH, UNPACK_SKIP_ROWS, and UNPACK_SKIP_PIXELS. Before obtaining the first group from memory, the pointer supplied to DrawPixels is effectively advanced by UNPACK_SKIP_PIXELS × n + UNPACK_SKIP_ROWS × k elements. Then width groups are obtained from contiguous elements in memory (without advancing the pointer), after which the pointer is advanced by k elements. height sets of width groups of values are obtained this way. See figure 3.8.
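The initial pointer advance can be written out as a one-line helper (an illustrative sketch; the function name is ours):

```c
#include <stddef.h>

/* Elements skipped before the first group is read:
   UNPACK_SKIP_PIXELS * n + UNPACK_SKIP_ROWS * k, where n is the
   group size and k is the row stride from equation 3.13. */
static size_t unpack_start_offset(size_t skip_pixels, size_t skip_rows,
                                  size_t n, size_t k) {
    return skip_pixels * n + skip_rows * k;
}
```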

Calling DrawPixels with a type matching one of the types in table 3.8 is a special case in which all the components of each group are packed into a single unsigned byte, unsigned short, or unsigned int, depending on the type. If type is FLOAT_32_UNSIGNED_INT_24_8_REV, the components of each group are two 32-bit words; the first word contains the float component, and the second word contains packed 24-bit and 8-bit components. The number of components per packed pixel is fixed by the type, and must match the number of components per group indicated by the format parameter, as listed in table 3.8. The error INVALID_OPERATION is generated if a mismatch occurs. This constraint also holds for all other functions that accept or return pixel data using type and format parameters to define the type and format of that data.

Bitfield locations of the first, second, third, and fourth components of each packed pixel type are illustrated in tables 3.9, 3.10, and 3.11. Each bitfield is interpreted as an unsigned integer value. If the base GL type is supported with more than the minimum precision (e.g., a 9-bit byte), the packed components are right-justified in the pixel.

Components are normally packed with the first component in the most significant bits of the bitfield, and successive components occupying progressively less significant locations. Types whose token names end with REV reverse the component packing order from least to most significant locations. In all cases, the most significant bit of each component is packed in the most significant bit location of its location in the bitfield.


type Parameter Token Name       GL Data Type   Number of Components   Matching Pixel Formats
UNSIGNED_BYTE_3_3_2             ubyte          3                      RGB
UNSIGNED_BYTE_2_3_3_REV         ubyte          3                      RGB
UNSIGNED_SHORT_5_6_5            ushort         3                      RGB
UNSIGNED_SHORT_5_6_5_REV        ushort         3                      RGB
UNSIGNED_SHORT_4_4_4_4          ushort         4                      RGBA, BGRA
UNSIGNED_SHORT_4_4_4_4_REV      ushort         4                      RGBA, BGRA
UNSIGNED_SHORT_5_5_5_1          ushort         4                      RGBA, BGRA
UNSIGNED_SHORT_1_5_5_5_REV      ushort         4                      RGBA, BGRA
UNSIGNED_INT_8_8_8_8            uint           4                      RGBA, BGRA
UNSIGNED_INT_8_8_8_8_REV        uint           4                      RGBA, BGRA
UNSIGNED_INT_10_10_10_2         uint           4                      RGBA, BGRA
UNSIGNED_INT_2_10_10_10_REV     uint           4                      RGBA, BGRA
UNSIGNED_INT_24_8               uint           2                      DEPTH_STENCIL
UNSIGNED_INT_10F_11F_11F_REV    uint           3                      RGB
UNSIGNED_INT_5_9_9_9_REV        uint           4                      RGB
FLOAT_32_UNSIGNED_INT_24_8_REV  n/a            2                      DEPTH_STENCIL

Table 3.8: Packed pixel formats.


UNSIGNED_BYTE_3_3_2:

1st component: bits 7..5; 2nd: bits 4..2; 3rd: bits 1..0

UNSIGNED_BYTE_2_3_3_REV:

3rd component: bits 7..6; 2nd: bits 5..3; 1st: bits 2..0

Table 3.9: UNSIGNED_BYTE formats. Bit numbers are indicated for each component.


UNSIGNED_SHORT_5_6_5:

1st component: bits 15..11; 2nd: bits 10..5; 3rd: bits 4..0

UNSIGNED_SHORT_5_6_5_REV:

3rd component: bits 15..11; 2nd: bits 10..5; 1st: bits 4..0

UNSIGNED_SHORT_4_4_4_4:

1st component: bits 15..12; 2nd: bits 11..8; 3rd: bits 7..4; 4th: bits 3..0

UNSIGNED_SHORT_4_4_4_4_REV:

4th component: bits 15..12; 3rd: bits 11..8; 2nd: bits 7..4; 1st: bits 3..0

UNSIGNED_SHORT_5_5_5_1:

1st component: bits 15..11; 2nd: bits 10..6; 3rd: bits 5..1; 4th: bit 0

UNSIGNED_SHORT_1_5_5_5_REV:

4th component: bit 15; 3rd: bits 14..10; 2nd: bits 9..5; 1st: bits 4..0

Table 3.10: UNSIGNED_SHORT formats

Version 3.0 -August 11, 2008


UNSIGNED_INT_8_8_8_8:

1st component: bits 31..24; 2nd: bits 23..16; 3rd: bits 15..8; 4th: bits 7..0

UNSIGNED_INT_8_8_8_8_REV:

4th component: bits 31..24; 3rd: bits 23..16; 2nd: bits 15..8; 1st: bits 7..0

UNSIGNED_INT_10_10_10_2:

1st component: bits 31..22; 2nd: bits 21..12; 3rd: bits 11..2; 4th: bits 1..0

UNSIGNED_INT_2_10_10_10_REV:

4th component: bits 31..30; 3rd: bits 29..20; 2nd: bits 19..10; 1st: bits 9..0

UNSIGNED_INT_24_8:

1st component: bits 31..8; 2nd: bits 7..0

UNSIGNED_INT_10F_11F_11F_REV:

3rd component: bits 31..22; 2nd: bits 21..11; 1st: bits 10..0

UNSIGNED_INT_5_9_9_9_REV:

4th component: bits 31..27; 3rd: bits 26..18; 2nd: bits 17..9; 1st: bits 8..0

Table 3.11: UNSIGNED_INT formats



Format          First Component   Second Component   Third Component   Fourth Component
RGB             red               green              blue
RGBA            red               green              blue              alpha
BGRA            blue              green              red               alpha
DEPTH_STENCIL   depth             stencil

Table 3.12: Packed pixel field assignments.

The assignment of components to fields in the packed pixel is as described in table 3.12.

Byte swapping, if enabled, is performed before the components are extracted from each pixel. The above discussions of row length and image extraction are valid for packed pixels, if “group” is substituted for “component” and the number of components per group is understood to be one.
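As a concrete illustration of packed component extraction (the function name is ours): per tables 3.10 and 3.12, an UNSIGNED_SHORT_5_6_5 pixel with format RGB carries red in the most significant bits:

```c
#include <stdint.h>

/* Extract the RGB components of an UNSIGNED_SHORT_5_6_5 pixel.
   Red occupies bits 15..11, green bits 10..5, blue bits 4..0. */
static void unpack_565(uint16_t p, unsigned *r, unsigned *g, unsigned *b) {
    *r = (p >> 11) & 0x1F;
    *g = (p >> 5) & 0x3F;
    *b = p & 0x1F;
}
```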

Calling DrawPixels with a type of UNSIGNED_INT_10F_11F_11F_REV and format of RGB is a special case in which the data are a series of GL uint values. Each uint value specifies 3 packed components as shown in table 3.11. The 1st, 2nd, and 3rd components are called f_red (11 bits), f_green (11 bits), and f_blue (10 bits), respectively.

f_red and f_green are treated as unsigned 11-bit floating-point values and converted to floating-point red and green components respectively as described in section 2.1.3. f_blue is treated as an unsigned 10-bit floating-point value and converted to a floating-point blue component as described in section 2.1.4.

Calling DrawPixels with a type of UNSIGNED_INT_5_9_9_9_REV and format of RGB is a special case in which the data are a series of GL uint values. Each uint value specifies 4 packed components as shown in table 3.11. The 1st, 2nd, 3rd, and 4th components are called p_red, p_green, p_blue, and p_exp, respectively, and are treated as unsigned integers. These are then used to compute floating-point RGB components (ignoring the “Conversion to floating-point” section below in this case) as follows:

red = p_red · 2^(p_exp − B − N)

green = p_green · 2^(p_exp − B − N)

blue = p_blue · 2^(p_exp − B − N)

where B = 15 (the exponent bias) and N = 9 (the number of mantissa bits).



Calling DrawPixels with a type of BITMAP is a special case in which the data are a series of GL ubyte values. Each ubyte value specifies 8 1-bit elements with its 8 least-significant bits. The 8 single-bit elements are ordered from most significant to least significant if the value of UNPACK_LSB_FIRST is FALSE; otherwise, the ordering is from least significant to most significant. The values of bits other than the 8 least significant in each ubyte are not significant.

The first element of the first row is the first bit (as defined above) of the ubyte pointed to by the pointer passed to DrawPixels. The first element of the second row is the first bit (again as defined above) of the ubyte at location p + k, where k is computed as

k = a ⌈l/(8a)⌉        (3.14)

There is a mechanism for selecting a sub-rectangle of elements from a BITMAP image as well. Before obtaining the first element from memory, the pointer supplied to DrawPixels is effectively advanced by UNPACK_SKIP_ROWS × k ubytes. Then UNPACK_SKIP_PIXELS 1-bit elements are ignored, and the subsequent width 1-bit elements are obtained, without advancing the ubyte pointer, after which the pointer is advanced by k ubytes. height sets of width elements are obtained this way.
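Equation 3.14 can be sketched as integer arithmetic (function name is ours):

```c
/* Row-to-row advance in ubytes for BITMAP data (equation 3.14):
   k = a * ceil(l / (8a)), where l is the row length in 1-bit
   elements and a is UNPACK_ALIGNMENT. */
static unsigned bitmap_row_stride(unsigned l, unsigned a) {
    return a * ((l + 8 * a - 1) / (8 * a));
}
```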

Conversion to floating-point

This step applies only to groups of floating-point components. It is not performed on indices or integer components. For groups containing both components and indices, such as DEPTH_STENCIL, the indices are not converted.

Each element in a group is converted to a floating-point value according to the appropriate formula in table 2.10 (section 2.19). For packed pixel types, each element in the group is converted by computing c/(2^N − 1), where c is the unsigned integer value of the bitfield containing the element and N is the number of bits in the bitfield.
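The packed-element conversion is a one-liner (function name is ours):

```c
/* Convert a packed bitfield element to floating point: c / (2^N - 1). */
static double normalize_component(unsigned c, unsigned nbits) {
    return (double)c / (double)((1u << nbits) - 1u);
}
```

For a 5-bit field, the maximum value 31 maps exactly to 1.0.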

Conversion to RGB

This step is applied only if the format is LUMINANCE or LUMINANCE_ALPHA. If the format is LUMINANCE, then each group of one element is converted to a group of R, G, and B (three) elements by copying the original single element into each of the three new elements. If the format is LUMINANCE_ALPHA, then each group of two elements is converted to a group of R, G, B, and A (four) elements by copying the first original element into each of the first three new elements and copying the second original element to the A (fourth) new element.

Final Expansion to RGBA

This step is performed only for non-depth component groups. Each group is converted to a group of 4 elements as follows: if a group does not contain an A element, then A is added and set to 1 for integer components or 1.0 for floating-point components. If any of R, G, or B is missing from the group, each missing element is added and assigned a value of 0 for integer components or 0.0 for floating-point components.
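A hedged sketch of this expansion for floating-point groups whose components arrive in R, G, B(, A) order (e.g. RG, RGB, RGBA; single-channel formats such as BLUE would need extra handling, and the function name is ours):

```c
/* Final expansion to RGBA for a floating-point color group.
   'count' components are present in R, G, B order; has_alpha
   indicates whether the last component is A. Missing color
   channels become 0.0, a missing alpha becomes 1.0. */
static void expand_to_rgba(const float *in, int count, int has_alpha,
                           float out[4]) {
    out[0] = 0.0f; out[1] = 0.0f; out[2] = 0.0f; out[3] = 1.0f;
    int ncolor = has_alpha ? count - 1 : count;
    for (int i = 0; i < ncolor; i++)
        out[i] = in[i];
    if (has_alpha)
        out[3] = in[count - 1];
}
```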

Pixel Transfer Operations

This step is actually a sequence of steps. Because the pixel transfer operations are performed equivalently during the drawing, copying, and reading of pixels, and during the specification of texture images (either from memory or from the framebuffer), they are described separately in section 3.7.5. After the processing described in that section is completed, groups are processed as described in the following sections.

Final Conversion

For a color index, final conversion consists of masking the bits of the index to the left of the binary point by 2^n − 1, where n is the number of bits in an index buffer.

For integer RGBA components, no conversion is performed. For floating-point RGBA components, if fragment color clamping is enabled, each element is clamped to [0,1], and may be converted to fixed-point according to the rules given in section 2.19.9. If fragment color clamping is disabled, RGBA components are unmodified. Fragment color clamping is controlled using ClampColor, as described in section 2.19.6, with a target of CLAMP_FRAGMENT_COLOR.

For a depth component, an element is processed according to the depth buffer’s representation. For fixed-point depth buffers, the element is first clamped to the range [0,1] and then converted to fixed-point as if it were a window z value (see section 2.12.1). Conversion is not necessary when the depth buffer uses a floating-point representation, but clamping is.

Stencil indices are masked by 2^n − 1, where n is the number of bits in the stencil buffer.

The state required for fragment color clamping is a three-valued integer. The initial value of fragment color clamping is FIXED_ONLY.



Conversion to Fragments

The conversion of a group to fragments is controlled with

void PixelZoom( float zx, float zy );

Let (x_rp, y_rp) be the current raster position (section 2.18). (If the current raster position is invalid, then DrawPixels is ignored; pixel transfer operations do not update the histogram or minmax tables, and no fragments are generated. However, the histogram and minmax tables are updated even if the corresponding fragments are later rejected by the pixel ownership (section 4.1.1) or scissor (section 4.1.2) tests.) If a particular group (index or components) is the nth in a row and belongs to the mth row, consider the region in window coordinates bounded by the rectangle with corners

(x_rp + zx · n, y_rp + zy · m) and (x_rp + zx · (n + 1), y_rp + zy · (m + 1))

(either zx or zy may be negative). A fragment representing group (n, m) is produced for each framebuffer pixel inside, or on the bottom or left boundary, of this rectangle.
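The corner computation above can be sketched as follows (function name is ours):

```c
/* Window-coordinate rectangle covered by group (n, m) under
   PixelZoom factors (zx, zy), relative to the current raster
   position (xrp, yrp). */
static void zoom_rect(float xrp, float yrp, float zx, float zy,
                      int n, int m, float corner0[2], float corner1[2]) {
    corner0[0] = xrp + zx * (float)n;
    corner0[1] = yrp + zy * (float)m;
    corner1[0] = xrp + zx * (float)(n + 1);
    corner1[1] = yrp + zy * (float)(m + 1);
}
```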

A fragment arising from a group consisting of color data takes on the color index or color components of the group and the current raster position’s associated depth value, while a fragment arising from a depth component takes that component’s depth value and the current raster position’s associated color index or color components. In both cases, the fog coordinate is taken from the current raster position’s associated raster distance, the secondary color is taken from the current raster position’s associated secondary color, and texture coordinates are taken from the current raster position’s associated texture coordinates. Groups arising from DrawPixels with a format of DEPTH_STENCIL or STENCIL_INDEX are treated specially and are described in section 4.3.1.

3.7.5 Pixel Transfer Operations

The GL defines six kinds of pixel groups:

  1. Floating-point RGBA component: Each group comprises four color components in floating-point format: red, green, blue, and alpha.

  2. Integer RGBA component: Each group comprises four color components in integer format: red, green, blue, and alpha.

  3. Depth component: Each group comprises a single depth component.

  4. Color index: Each group comprises a single color index.

  5. Stencil index: Each group comprises a single stencil index.

  6. Depth/stencil: Each group comprises a single depth component and a single stencil index.

Each operation described in this section is applied sequentially to each pixel group in an image. Many operations are applied only to pixel groups of certain kinds; if an operation is not applicable to a given group, it is skipped. None of the operations defined in this section affect integer RGBA component pixel groups.

Arithmetic on Components

This step applies only to RGBA component and depth component groups, and to the depth components in depth/stencil groups. Each component is multiplied by an appropriate signed scale factor: RED_SCALE for an R component, GREEN_SCALE for a G component, BLUE_SCALE for a B component, ALPHA_SCALE for an A component, or DEPTH_SCALE for a depth component. Then the result is added to the appropriate signed bias: RED_BIAS, GREEN_BIAS, BLUE_BIAS, ALPHA_BIAS, or DEPTH_BIAS.

Arithmetic on Indices

This step applies only to color index and stencil index groups, and to the stencil indices in depth/stencil groups. If the index is a floating-point value, it is converted to fixed-point, with an unspecified number of bits to the right of the binary point and at least log2(MAX_PIXEL_MAP_TABLE) bits to the left of the binary point. Indices that are already integers remain so; any fraction bits in the resulting fixed-point value are zero.

The fixed-point index is then shifted by |INDEX_SHIFT| bits, left if INDEX_SHIFT > 0 and right otherwise. In either case the shift is zero-filled. Then, the signed integer offset INDEX_OFFSET is added to the index.
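The shift-and-offset arithmetic can be sketched for integer indices (function name is ours):

```c
/* Apply INDEX_SHIFT and INDEX_OFFSET to an integer index:
   left shift if shift > 0, right shift otherwise; both shifts
   are zero-filled, then the signed offset is added. */
static long index_arith(unsigned long index, int shift, long offset) {
    unsigned long shifted =
        (shift > 0) ? (index << shift) : (index >> -shift);
    return (long)shifted + offset;
}
```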

RGBA to RGBA Lookup

This step applies only to RGBA component groups, and is skipped if MAP_COLOR is FALSE. First, each component is clamped to the range [0,1]. There is a table associated with each of the R, G, B, and A component elements: PIXEL_MAP_R_TO_R for R, PIXEL_MAP_G_TO_G for G, PIXEL_MAP_B_TO_B for B, and PIXEL_MAP_A_TO_A for A. Each element is multiplied by an integer one less than the size of the corresponding table, and, for each element, an address is found by rounding this value to the nearest integer. For each element, the addressed value in the corresponding table replaces the element.

Color Index Lookup

This step applies only to color index groups. If the GL command that invokes the pixel transfer operation requires that RGBA component pixel groups be generated, then a conversion is performed at this step. RGBA component pixel groups are required if

  1. The groups will be rasterized, and the GL is in RGBA mode, or

  2. The groups will be loaded as an image into texture memory, or

  3. The groups will be returned to client memory with a format other than COLOR_INDEX.

If RGBA component groups are required, then the integer part of the index is used to reference 4 tables of color components: PIXEL_MAP_I_TO_R, PIXEL_MAP_I_TO_G, PIXEL_MAP_I_TO_B, and PIXEL_MAP_I_TO_A. Each of these tables must have 2^n entries for some integer value of n (n may be different for each table). For each table, the index is first rounded to the nearest integer; the result is ANDed with 2^n − 1, and the resulting value used as an address into the table. The indexed value becomes an R, G, B, or A value, as appropriate. The group of four elements so obtained replaces the index, changing the group’s type to RGBA component.

If RGBA component groups are not required, and if MAP_COLOR is enabled, then the index is looked up in the PIXEL_MAP_I_TO_I table (otherwise, the index is not looked up). Again, the table must have 2^n entries for some integer n. The index is first rounded to the nearest integer; the result is ANDed with 2^n − 1, and the resulting value used as an address into the table. The value in the table replaces the index. The floating-point table value is first rounded to a fixed-point value with unspecified precision. The group’s type remains color index.

Stencil Index Lookup

This step applies only to stencil index groups, and to the stencil indices in depth/stencil groups. If MAP_STENCIL is enabled, then the index is looked up in the PIXEL_MAP_S_TO_S table (otherwise, the index is not looked up). The table must have 2^n entries for some integer n. The integer index is ANDed with 2^n − 1, and the resulting value used as an address into the table. The integer value in the table replaces the index.


Base Internal Format   R    G    B    A
ALPHA                  –    –    –    At
LUMINANCE              Lt   Lt   Lt   –
LUMINANCE_ALPHA        Lt   Lt   Lt   At
INTENSITY              It   It   It   It
RGB                    Rt   Gt   Bt   –
RGBA                   Rt   Gt   Bt   At

Table 3.13: Color table lookup. Rt, Gt, Bt, At, Lt, and It are color table values that are assigned to pixel components R, G, B, and A depending on the table format. When there is no assignment, the component value is left unchanged by lookup.

Color Table Lookup

This step applies only to RGBA component groups. Color table lookup is only done if COLOR_TABLE is enabled. If a zero-width table is enabled, no lookup is performed.

The internal format of the table determines which components of the group will be replaced (see table 3.13). The components to be replaced are converted to indices by clamping to [0,1], multiplying by an integer one less than the width of the table, and rounding to the nearest integer. Components are replaced by the table entry at the index.

The required state is one bit indicating whether color table lookup is enabled or disabled. In the initial state, lookup is disabled.

Convolution

This step applies only to RGBA component groups. If CONVOLUTION_1D is enabled, the one-dimensional convolution filter is applied only to the one-dimensional texture images passed to TexImage1D, TexSubImage1D, CopyTexImage1D, and CopyTexSubImage1D. If CONVOLUTION_2D is enabled, the two-dimensional convolution filter is applied only to the two-dimensional images passed to DrawPixels, CopyPixels, ReadPixels, TexImage2D, TexSubImage2D, CopyTexImage2D, CopyTexSubImage2D, and CopyTexSubImage3D. If SEPARABLE_2D is enabled, and CONVOLUTION_2D is disabled, the separable two-dimensional convolution filter is instead applied to these images.

The convolution operation is a sum of products of source image pixels and convolution filter pixels. Source image pixels always have four components: red,


Base Filter Format   R         G         B         A
ALPHA                Rs        Gs        Bs        As ∗ Af
LUMINANCE            Rs ∗ Lf   Gs ∗ Lf   Bs ∗ Lf   As
LUMINANCE_ALPHA      Rs ∗ Lf   Gs ∗ Lf   Bs ∗ Lf   As ∗ Af
INTENSITY            Rs ∗ If   Gs ∗ If   Bs ∗ If   As ∗ If
RGB                  Rs ∗ Rf   Gs ∗ Gf   Bs ∗ Bf   As
RGBA                 Rs ∗ Rf   Gs ∗ Gf   Bs ∗ Bf   As ∗ Af

Table 3.14: Computation of filtered color components depending on filter image format. C ∗ F indicates the convolution of image component C with filter F.

green, blue, and alpha, denoted in the equations below as Rs, Gs, Bs, and As. Filter pixels may be stored in one of five formats, with 1, 2, 3, or 4 components. These components are denoted as Rf, Gf, Bf, Af, Lf, and If in the equations below. The result of the convolution operation is the 4-tuple R, G, B, A. Depending on the internal format of the filter, individual color components of each source image pixel are convolved with one filter component, or are passed unmodified. The rules for this are defined in table 3.14.

The convolution operation is defined differently for each of the three convolution filters. The variables Wf and Hf refer to the dimensions of the convolution filter. The variables Ws and Hs refer to the dimensions of the source pixel image.

The convolution equations are defined as follows, where C refers to the filtered result, Cf refers to the one- or two-dimensional convolution filter, and Crow and Ccolumn refer to the two one-dimensional filters comprising the two-dimensional separable filter. Cs′ depends on the source image color Cs and the convolution border mode as described below. Cr, the filtered output image, depends on all of these variables and is described separately for each border mode. The pixel indexing nomenclature is described in the Convolution Filter Specification subsection of section 3.7.3.

One-dimensional filter:

C[i] = Σ (n = 0 to Wf − 1) Cs′[i + n] · Cf[n]

Two-dimensional filter:

C[i, j] = Σ (n = 0 to Wf − 1) Σ (m = 0 to Hf − 1) Cs′[i + n, j + m] · Cf[n, m]

Two-dimensional separable filter:

C[i, j] = Σ (n = 0 to Wf − 1) Σ (m = 0 to Hf − 1) Cs′[i + n, j + m] · Crow[n] · Ccolumn[m]

If Wf of a one-dimensional filter is zero, then C[i] is always set to zero. Likewise, if either Wf or Hf of a two-dimensional filter is zero, then C[i, j] is always set to zero.

The convolution border mode for a specific convolution filter is specified by calling

void ConvolutionParameter{if}( enum target, enum pname, T param );

where target is the name of the filter, pname is CONVOLUTION_BORDER_MODE, and param is one of REDUCE, CONSTANT_BORDER, or REPLICATE_BORDER.

Border Mode REDUCE

The width and height of source images convolved with border mode REDUCE are reduced by Wf − 1 and Hf − 1, respectively. If this reduction would generate a resulting image with zero or negative width and/or height, the output is simply null, with no error generated. The coordinates of the image that results from a convolution with border mode REDUCE are zero through Ws − Wf in width, and zero through Hs − Hf in height. In cases where errors can result from the specification of invalid image dimensions, it is these resulting dimensions that are tested, not the dimensions of the source image. (A specific example is TexImage1D and TexImage2D, which specify constraints for image dimensions. Even if TexImage1D or TexImage2D is called with a null pixel pointer, the dimensions of the resulting texture image are those that would result from the convolution of the specified image.)

When the border mode is REDUCE, C'_s equals the source image color C_s and C_r equals the filtered result C'.
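The REDUCE-mode form of the one-dimensional equation is simple enough to transcribe directly. The following C fragment is an illustrative sketch only, not part of the specification; the function name and types are inventions of this sketch:

```c
#include <stddef.h>

/* Sketch of the 1-D convolution equation with border mode REDUCE:
 * out[i] = sum over n of src[i + n] * filter[n], for n in [0, filter_w).
 * With REDUCE, C'_s is just the source image, and the output width
 * shrinks to src_w - filter_w + 1; callers must size `out` accordingly. */
static void convolve_1d_reduce(const float *src, size_t src_w,
                               const float *filter, size_t filter_w,
                               float *out)
{
    /* degenerate cases (zero-width filter, filter wider than the source)
     * are left unhandled in this sketch */
    if (filter_w == 0 || filter_w > src_w)
        return;
    for (size_t i = 0; i + filter_w <= src_w; ++i) {
        float acc = 0.0f;
        for (size_t n = 0; n < filter_w; ++n)
            acc += src[i + n] * filter[n];
        out[i] = acc;
    }
}
```

Applying a box filter {1, 1} to the source {1, 2, 3, 4} yields {3, 5, 7}, one element narrower than the source, as REDUCE prescribes.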

For the remaining border modes, define C_w = \lfloor W_f / 2 \rfloor and C_h = \lfloor H_f / 2 \rfloor. The coordinates (C_w, C_h) define the center of the convolution filter.

Border Mode CONSTANT_BORDER

If the convolution border mode is CONSTANT_BORDER, the output image has the same dimensions as the source image. The result of the convolution is the same as if the source image were surrounded by pixels with the same color as the current convolution border color. Whenever the convolution filter extends beyond one of the edges of the source image, the constant-color border pixels are used as input to the filter. The current convolution border color is set by calling ConvolutionParameterfv or ConvolutionParameteriv with pname set to CONVOLUTION_BORDER_COLOR and params containing four values that comprise the RGBA color to be used as the image border. Integer color components are interpreted linearly such that the largest positive integer maps to 1.0, and the smallest negative integer maps to -1.0. Floating point color components are not clamped when they are specified.

For a one-dimensional filter, the result color is defined by

    C_r[i] = C'[i - C_w]

where C'[i] is computed using the following equation for C'_s[i]:

    C'_s[i] = \begin{cases} C_s[i], & 0 \le i < W_s \\ C_c, & \text{otherwise} \end{cases}

and C_c is the convolution border color.

For a two-dimensional or two-dimensional separable filter, the result color is defined by

    C_r[i,j] = C'[i - C_w, j - C_h]

where C'[i,j] is computed using the following equation for C'_s[i,j]:

    C'_s[i,j] = \begin{cases} C_s[i,j], & 0 \le i < W_s, \; 0 \le j < H_s \\ C_c, & \text{otherwise} \end{cases}

Border Mode REPLICATE_BORDER

The convolution border mode REPLICATE_BORDER also produces an output image with the same dimensions as the source image. The behavior of this mode is identical to that of the CONSTANT_BORDER mode except for the treatment of pixel locations where the convolution filter extends beyond the edge of the source image. For these locations, it is as if the outermost one-pixel border of the source image was replicated. Conceptually, each pixel in the leftmost one-pixel column of the source image is replicated C_w times to provide additional image data along the left edge, each pixel in the rightmost one-pixel column is replicated C_w times to provide additional image data along the right edge, and each pixel value in the top and bottom one-pixel rows is replicated to create C_h rows of image data along the top and bottom edges. The pixel value at each corner is also replicated in order to provide data for the convolution operation at each corner of the source image.

For a one-dimensional filter, the result color is defined by

    C_r[i] = C'[i - C_w]

where C'[i] is computed using the following equation for C'_s[i]:

    C'_s[i] = C_s[clamp(i, W_s)]

and the clamping function clamp(val, max) is defined as

    clamp(val, max) = \begin{cases} 0, & val < 0 \\ val, & 0 \le val < max \\ max - 1, & val \ge max \end{cases}
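The clamping function and the replicate-border fetch can be sketched in C as follows. This is illustrative only; the function names are inventions of this sketch:

```c
/* The clamp(val, max) function from the REPLICATE_BORDER equations:
 * indices below 0 map to 0, indices at or beyond max map to max - 1. */
static int clamp_index(int val, int max)
{
    if (val < 0)    return 0;
    if (val >= max) return max - 1;
    return val;
}

/* C'_s[i] for REPLICATE_BORDER is then simply a clamped source fetch,
 * which repeats the edge pixels for out-of-range indices. */
static float replicate_fetch_1d(const float *src, int src_w, int i)
{
    return src[clamp_index(i, src_w)];
}
```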

For a two-dimensional or two-dimensional separable filter, the result color is defined by

    C_r[i,j] = C'[i - C_w, j - C_h]

where C'[i,j] is computed using the following equation for C'_s[i,j]:

    C'_s[i,j] = C_s[clamp(i, W_s), clamp(j, H_s)]

If a convolution operation is performed, each component of the resulting image is scaled by the corresponding PixelTransfer parameters: POST_CONVOLUTION_RED_SCALE for an R component, POST_CONVOLUTION_GREEN_SCALE for a G component, POST_CONVOLUTION_BLUE_SCALE for a B component, and POST_CONVOLUTION_ALPHA_SCALE for an A component. The result is added to the corresponding bias: POST_CONVOLUTION_RED_BIAS, POST_CONVOLUTION_GREEN_BIAS, POST_CONVOLUTION_BLUE_BIAS, or POST_CONVOLUTION_ALPHA_BIAS.

The required state is three bits indicating whether each of one-dimensional, two-dimensional, or separable two-dimensional convolution is enabled or disabled, an integer describing the current convolution border mode, and four floating-point values specifying the convolution border color. In the initial state, all convolution operations are disabled, the border mode is REDUCE, and the border color is (0, 0, 0, 0).


Post Convolution Color Table Lookup

This step applies only to RGBA component groups. Post convolution color table lookup is enabled or disabled by calling Enable or Disable with the symbolic constant POST_CONVOLUTION_COLOR_TABLE. The post convolution table is defined by calling ColorTable with a target argument of POST_CONVOLUTION_COLOR_TABLE. In all other respects, operation is identical to color table lookup, as defined earlier in section 3.7.5.

The required state is one bit indicating whether post convolution table lookup is enabled or disabled. In the initial state, lookup is disabled.

Color Matrix Transformation

This step applies only to RGBA component groups. The components are transformed by the color matrix. Each transformed component is multiplied by an appropriate signed scale factor: POST_COLOR_MATRIX_RED_SCALE for an R component, POST_COLOR_MATRIX_GREEN_SCALE for a G component, POST_COLOR_MATRIX_BLUE_SCALE for a B component, and POST_COLOR_MATRIX_ALPHA_SCALE for an A component. The result is added to a signed bias: POST_COLOR_MATRIX_RED_BIAS, POST_COLOR_MATRIX_GREEN_BIAS, POST_COLOR_MATRIX_BLUE_BIAS, or POST_COLOR_MATRIX_ALPHA_BIAS. The resulting components replace each component of the original group.

That is, if M_c is the color matrix, a subscript of s represents the scale term for a component, and a subscript of b represents the bias term, then the components

    \begin{pmatrix} R \\ G \\ B \\ A \end{pmatrix}

are transformed to

    \begin{pmatrix} R' \\ G' \\ B' \\ A' \end{pmatrix} =
    \begin{pmatrix} R_s & 0 & 0 & 0 \\
                    0 & G_s & 0 & 0 \\
                    0 & 0 & B_s & 0 \\
                    0 & 0 & 0 & A_s \end{pmatrix}
    M_c
    \begin{pmatrix} R \\ G \\ B \\ A \end{pmatrix} +
    \begin{pmatrix} R_b \\ G_b \\ B_b \\ A_b \end{pmatrix}.
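The transform above (matrix product, then per-component scale and bias) can be sketched in C. This is illustrative only, not GL's internal arithmetic; the function name is an invention of this sketch, and the matrix is stored row-major here (an assumption of the sketch, not the GL column-major convention):

```c
/* Sketch of the post-color-matrix transform: each component of the
 * matrix product M_c * (R,G,B,A)^T is multiplied by its scale term
 * and then offset by its bias term.
 * m: 4x4 color matrix in row-major order (sketch convention only). */
static void color_matrix_apply(const float m[16],
                               const float scale[4], const float bias[4],
                               const float in[4], float out[4])
{
    for (int r = 0; r < 4; ++r) {
        float dot = 0.0f;
        for (int c = 0; c < 4; ++c)
            dot += m[r * 4 + c] * in[c];  /* (M_c * in)_r */
        out[r] = scale[r] * dot + bias[r];
    }
}
```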

Post Color Matrix Color Table Lookup

This step applies only to RGBA component groups. Post color matrix color table lookup is enabled or disabled by calling Enable or Disable with the symbolic constant POST_COLOR_MATRIX_COLOR_TABLE. The post color matrix table is defined by calling ColorTable with a target argument of POST_COLOR_MATRIX_COLOR_TABLE. In all other respects, operation is identical to color table lookup, as defined in section 3.7.5.

The required state is one bit indicating whether post color matrix lookup is enabled or disabled. In the initial state, lookup is disabled.

Histogram

This step applies only to RGBA component groups. Histogram operation is enabled or disabled by calling Enable or Disable with the symbolic constant HISTOGRAM.

If the width of the table is non-zero, then indices R_i, G_i, B_i, and A_i are derived from the red, green, blue, and alpha components of each pixel group (without modifying these components) by clamping each component to [0, 1], multiplying by one less than the width of the histogram table, and rounding to the nearest integer. If the format of the HISTOGRAM table includes red or luminance, the red or luminance component of histogram entry R_i is incremented by one. If the format of the HISTOGRAM table includes green, the green component of histogram entry G_i is incremented by one. The blue and alpha components of histogram entries B_i and A_i are incremented in the same way. If a histogram entry component is incremented beyond its maximum value, its value becomes undefined; this is not an error.
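The index derivation (clamp, scale by width − 1, round to nearest) can be sketched in C as follows; the function name is an invention of this sketch, not a GL entry point:

```c
/* Derivation of a histogram index from one color component:
 * clamp to [0,1], scale by (table width - 1), round to nearest. */
static int histogram_index(float component, int table_width)
{
    if (component < 0.0f) component = 0.0f;
    if (component > 1.0f) component = 1.0f;
    /* adding 0.5 then truncating rounds to nearest for non-negative values */
    return (int)(component * (float)(table_width - 1) + 0.5f);
}
```

For a 256-entry table, a component of 0.0 maps to entry 0 and a component of 1.0 maps to entry 255.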

If the Histogram sink parameter is FALSE, histogram operation has no effect on the stream of pixel groups being processed. Otherwise, all RGBA pixel groups are discarded immediately after the histogram operation is completed. Because histogram precedes minmax, no minmax operation is performed. No pixel fragments are generated, no change is made to texture memory contents, and no pixel values are returned. However, texture object state is modified whether or not pixel groups are discarded.

Minmax

This step applies only to RGBA component groups. Minmax operation is enabled or disabled by calling Enable or Disable with the symbolic constant MINMAX.

If the format of the minmax table includes red or luminance, the red component value replaces the red or luminance value in the minimum table element if and only if it is less than that component. Likewise, if the format includes red or luminance and the red component of the group is greater than the red or luminance value in the maximum element, the red group component replaces the red or luminance maximum component. If the format of the table includes green, the green group component conditionally replaces the green minimum and/or maximum if it is smaller or larger, respectively. The blue and alpha group components are similarly tested and replaced, if the table format includes blue and/or alpha. The internal type of the minimum and maximum component values is floating point, with at least the same representable range as a floating point number used to represent colors (section 2.1.1). There are no semantics defined for the treatment of group component values that are outside the representable range.

If the Minmax sink parameter is FALSE, minmax operation has no effect on the stream of pixel groups being processed. Otherwise, all RGBA pixel groups are discarded immediately after the minmax operation is completed. No pixel fragments are generated, no change is made to texture memory contents, and no pixel values are returned. However, texture object state is modified whether or not pixel groups are discarded.
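The per-group minmax update described above amounts to a conditional replace per component. A minimal sketch (function name invented here, and initialization of the running extrema is left to the caller):

```c
/* Sketch of the minmax update for one RGBA group: each component
 * conditionally replaces the running minimum and maximum. The caller
 * initializes mins to a very large value and maxs to a very small one. */
static void minmax_update(const float group[4], float mins[4], float maxs[4])
{
    for (int c = 0; c < 4; ++c) {
        if (group[c] < mins[c]) mins[c] = group[c];
        if (group[c] > maxs[c]) maxs[c] = group[c];
    }
}
```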

3.7.6 Pixel Rectangle Multisample Rasterization

If MULTISAMPLE is enabled, and the value of SAMPLE_BUFFERS is one, then pixel rectangles are rasterized using the following algorithm. Let (X_rp, Y_rp) be the current raster position. (If the current raster position is invalid, then DrawPixels is ignored.) If a particular group (index or components) is the nth in a row and belongs to the mth row, consider the region in window coordinates bounded by the rectangle with corners

    (X_rp + Z_x n, Y_rp + Z_y m)

and

    (X_rp + Z_x (n + 1), Y_rp + Z_y (m + 1))

where Z_x and Z_y are the pixel zoom factors specified by PixelZoom, and may each be either positive or negative. A fragment representing group (n, m) is produced for each framebuffer pixel with one or more sample points that lie inside, or on the bottom or left boundary, of this rectangle. Each fragment so produced takes its associated data from the group and from the current raster position, in a manner consistent with the discussion in the Conversion to Fragments subsection of section 3.7.4. All depth and color sample values are assigned the same value, taken either from their group (for depth and color component groups) or from the current raster position (if they are not). All sample values are assigned the same fog coordinate and the same set of texture coordinates, taken from the current raster position.
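The two corner equations above can be computed directly. A small illustrative sketch (function name invented here; with negative zoom factors the two corners are not ordered, matching the specification's formulation):

```c
/* Corners of the window-space rectangle covered by group (n, m) of a
 * zoomed pixel rectangle: c0 = (Xrp + Zx*n, Yrp + Zy*m) and
 * c1 = (Xrp + Zx*(n+1), Yrp + Zy*(m+1)). zx and zy may be negative. */
static void pixel_rect_corners(float xrp, float yrp, float zx, float zy,
                               int n, int m, float c0[2], float c1[2])
{
    c0[0] = xrp + zx * (float)n;
    c0[1] = yrp + zy * (float)m;
    c1[0] = xrp + zx * (float)(n + 1);
    c1[1] = yrp + zy * (float)(m + 1);
}
```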


A single pixel rectangle will generate multiple, perhaps very many, fragments for the same framebuffer pixel, depending on the pixel zoom factors.

3.8 Bitmaps

Bitmaps are rectangles of zeros and ones specifying a particular pattern of fragments to be produced. Each of these fragments has the same associated data. These data are those associated with the current raster position.

Bitmaps are sent using

void Bitmap( sizei w, sizei h, float xbo, float ybo, float xbi, float ybi, ubyte *data );

w and h comprise the integer width and height of the rectangular bitmap, respectively. (x_bo, y_bo) gives the floating-point x and y values of the bitmap's origin. (x_bi, y_bi) gives the floating-point x and y increments that are added to the raster position after the bitmap is rasterized. data is a pointer to a bitmap.

Like a polygon pattern, a bitmap is unpacked from memory according to the procedure given in section 3.7.4 for DrawPixels; it is as if the width and height passed to that command were equal to w and h, respectively, the type were BITMAP, and the format were COLOR_INDEX. The unpacked values (before any conversion or arithmetic would have been performed) form a stipple pattern of zeros and ones. See figure 3.9.

A bitmap sent using Bitmap is rasterized as follows. First, if the current raster position is invalid (the valid bit is reset), the bitmap is ignored. Otherwise, a rectangular array of fragments is constructed, with lower left corner at

    (x_{ll}, y_{ll}) = (x_{rp} - x_{bo}, y_{rp} - y_{bo})

and upper right corner at (x_{ll} + w, y_{ll} + h), where w and h are the width and height of the bitmap, respectively. Fragments in the array are produced if the corresponding bit in the bitmap is 1 and not produced otherwise. The associated data for each fragment are those associated with the current raster position. Once the fragments have been produced, the current raster position is updated:

    (x_{rp}, y_{rp}) \leftarrow (x_{rp} + x_{bi}, y_{rp} + y_{bi}).

The z and w values of the current raster position remain unchanged.
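The fragment-production rule above (one fragment per 1 bit) can be sketched with a simple walk over an already unpacked bit array. This is illustrative only; the function name is an invention of this sketch, and unpacking from client memory (which follows the DrawPixels rules) is omitted:

```c
/* Count the fragments a w x h bitmap would produce: one fragment at
 * (xll + n, yll + m) for each 1 bit. `bits[m*w + n]` is an already
 * unpacked bitmap, one byte per bit, row m and column n. */
static int bitmap_fragment_count(const unsigned char *bits, int w, int h)
{
    int count = 0;
    for (int m = 0; m < h; ++m)
        for (int n = 0; n < w; ++n)
            if (bits[m * w + n])
                ++count;
    return count;
}
```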

Calling Bitmap will result in an INVALID_FRAMEBUFFER_OPERATION error if the object bound to DRAW_FRAMEBUFFER_BINDING is not framebuffer complete (see section 4.4.4).


Bitmap Multisample Rasterization

If MULTISAMPLE is enabled, and the value of SAMPLE_BUFFERS is one, then bitmaps are rasterized using the following algorithm. If the current raster position is invalid, the bitmap is ignored. Otherwise, a screen-aligned array of pixel-size rectangles is constructed, with its lower left corner at (X_rp, Y_rp), and its upper right corner at (X_rp + w, Y_rp + h), where w and h are the width and height of the bitmap. Rectangles in this array are eliminated if the corresponding bit in the bitmap is 0, and are retained otherwise. Bitmap rasterization produces a fragment for each framebuffer pixel with one or more sample points either inside or on the bottom or left edge of a retained rectangle.

Coverage bits that correspond to sample points either inside or on the bottom or left edge of a retained rectangle are 1, other coverage bits are 0. The associated data for each sample are those associated with the current raster position. Once the fragments have been produced, the current raster position is updated exactly as it is in the single-sample rasterization case.


3.9 Texturing

Texturing maps a portion of one or more specified images onto each primitive for which texturing is enabled. This mapping is accomplished by using the color of an image at the location indicated by a texture coordinate set's (s, t, r, q) coordinates.

The internal data type of a texture may be fixed-point, floating-point, signed integer, or unsigned integer, depending on the internal format of the texture. The correspondence between the internal format and the internal data type is given in tables 3.16-3.18. Fixed-point and floating-point textures return a floating-point value, and integer textures return signed or unsigned integer values. When a fragment shader is active, the shader is responsible for interpreting the result of a texture lookup as the correct data type; otherwise the result is undefined. When not using a fragment shader, floating-point texture values are assumed, and the results of using integer textures in this case are undefined.

Six types of texture are supported; each is a collection of images built from one-, two-, or three-dimensional arrays of image elements referred to as texels. One-, two-, and three-dimensional textures consist respectively of one-, two-, or three-dimensional texel arrays. One- and two-dimensional array textures are arrays of one- or two-dimensional images, consisting of one or more layers. Finally, a cube map is a special two-dimensional array texture with six layers that represent the faces of a cube. When accessing a cube map, the texture coordinates are projected onto one of the six faces of the cube.

Implementations must support texturing using at least two images at a time. Each fragment or vertex carries multiple sets of texture coordinates (s, t, r, q) which are used to index separate images to produce color values which are collectively used to modify the resulting transformed vertex or fragment color. Texturing is specified only for RGBA mode; its use in color index mode is undefined. The following subsections (up to and including section 3.9.7) specify the GL operation with a single texture and section 3.9.17 specifies the details of how multiple texture units interact.

The GL provides two ways to specify the details of how texturing of a primitive is effected. The first is referred to as fixed-function fragment shading, or simply fixed-function, and is described in this section. The second is referred to as a fragment shader, and is described in section 3.12. The specification of the image to be texture mapped and the means by which the image is filtered when applied to the primitive are common to both methods and are discussed in this section. The fixed-function method for determining what RGBA value is produced is also described in this section. If a fragment shader is active, the method for determining the RGBA value is specified by an application-supplied fragment shader as described in the OpenGL Shading Language Specification.


When no fragment shader is active, the coordinates used for texturing are (s/q, t/q, r/q), derived from the original texture coordinates (s, t, r, q). If the q texture coordinate is less than or equal to zero, the coordinates used for texturing are undefined. When a fragment shader is active, the (s, t, r, q) coordinates are available to the fragment shader. The coordinates used for texturing in a fragment shader are defined by the OpenGL Shading Language Specification.

3.9.1 Texture Image Specification

The command

void TexImage3D( enum target, int level, int internalformat, sizei width, sizei height, sizei depth, int border, enum format, enum type, void *data );

is used to specify a three-dimensional texture image. target must be one of TEXTURE_3D for a three-dimensional texture or TEXTURE_2D_ARRAY for a two-dimensional array texture. Additionally, target may be either PROXY_TEXTURE_3D for a three-dimensional proxy texture, or PROXY_TEXTURE_2D_ARRAY for a two-dimensional proxy array texture, as discussed in section 3.9.11. format, type, and data match the corresponding arguments to DrawPixels (refer to section 3.7.4); they specify the format of the image data, the type of those data, and a reference to the image data in the currently bound pixel unpack buffer or client memory. The format STENCIL_INDEX is not allowed.

The groups in memory are treated as being arranged in a sequence of adjacent rectangles. Each rectangle is a two-dimensional image, whose size and organization are specified by the width and height parameters to TexImage3D. The values of UNPACK_ROW_LENGTH and UNPACK_ALIGNMENT control the row-to-row spacing in these images in the same manner as DrawPixels. If the value of the integer parameter UNPACK_IMAGE_HEIGHT is not positive, then the number of rows in each two-dimensional image is height; otherwise the number of rows is UNPACK_IMAGE_HEIGHT. Each two-dimensional image comprises an integral number of rows, and is exactly adjacent to its neighbor images.

The mechanism for selecting a sub-volume of a three-dimensional image relies on the integer parameter UNPACK_SKIP_IMAGES. If UNPACK_SKIP_IMAGES is positive, the pointer is advanced by UNPACK_SKIP_IMAGES times the number of elements in one two-dimensional image before obtaining the first group from memory. Then depth two-dimensional images are processed, each having a subimage extracted in the same manner as DrawPixels.
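The skip-images arithmetic above can be sketched as a group-offset calculation. This is illustrative only (the function name is an invention of this sketch); byte-level alignment and component sizes, which the real unpacking rules also account for, are deliberately omitted:

```c
#include <stddef.h>

/* Sketch of the group offset into client memory for image slice k of a
 * 3-D upload: UNPACK_SKIP_IMAGES whole slices are skipped first, and
 * each slice holds image_height rows of row_length groups. This counts
 * groups, not bytes. */
static size_t slice_group_offset(size_t row_length, size_t image_height,
                                 size_t skip_images, size_t k)
{
    size_t groups_per_slice = row_length * image_height;
    return (skip_images + k) * groups_per_slice;
}
```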

The selected groups are processed exactly as for DrawPixels, stopping just before final conversion. If the internalformat of the texture is signed or unsigned integer, the components are clamped to the representable range of the internal format. For signed formats, this is [-2^{n-1}, 2^{n-1} - 1] where n is the number of bits per component; for unsigned formats, the range is [0, 2^n - 1]. For color component groups, if the internalformat of the texture is fixed-point, the R, G, B, and A values are clamped to [0, 1]. For depth component groups, the depth value is clamped to [0, 1]. Otherwise, values are not modified. Stencil index values are masked by 2^n - 1, where n is the number of stencil bits in the internal format resolution (see below). If the base internal format is DEPTH_STENCIL and format is not DEPTH_STENCIL, then the values of the stencil index texture components are undefined.
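The signed and unsigned clamp ranges can be computed from the bit count n. A small sketch (function name invented here):

```c
#include <stdint.h>

/* Representable ranges used when clamping components for integer
 * internal formats with n bits per component:
 *   signed:   [-2^(n-1), 2^(n-1) - 1]
 *   unsigned: [0, 2^n - 1]
 * Valid for 1 <= n <= 62 with 64-bit arithmetic. */
static void int_format_range(int n, int is_signed, int64_t *lo, int64_t *hi)
{
    if (is_signed) {
        *lo = -((int64_t)1 << (n - 1));
        *hi =  ((int64_t)1 << (n - 1)) - 1;
    } else {
        *lo = 0;
        *hi = ((int64_t)1 << n) - 1;
    }
}
```

For example, an 8-bit signed component clamps to [-128, 127] and a 16-bit unsigned component to [0, 65535].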

Components are then selected from the resulting R, G, B, A, depth, or stencil values to obtain a texture with the base internal format specified by (or derived from) internalformat. Table 3.15 summarizes the mapping of R, G, B, A, depth, or stencil values to texture components, as a function of the base internal format of the texture image. internalformat may be specified as one of the internal format symbolic constants listed in table 3.15, as one of the sized internal format symbolic constants listed in tables 3.16-3.18, as one of the generic compressed internal format symbolic constants listed in table 3.19, or as one of the specific compressed internal format symbolic constants (if listed in table 3.19). internalformat may (for backwards compatibility with the 1.0 version of the GL) also take on the integer values 1, 2, 3, and 4, which are equivalent to the symbolic constants LUMINANCE, LUMINANCE_ALPHA, RGB, and RGBA respectively. Specifying a value for internalformat that is not one of the above values generates the error INVALID_VALUE.

Textures with a base internal format of DEPTH_COMPONENT or DEPTH_STENCIL are supported by texture image specification commands only if target is TEXTURE_1D, TEXTURE_2D, TEXTURE_1D_ARRAY, TEXTURE_2D_ARRAY, TEXTURE_CUBE_MAP, PROXY_TEXTURE_1D, PROXY_TEXTURE_2D, PROXY_TEXTURE_1D_ARRAY, PROXY_TEXTURE_2D_ARRAY, or PROXY_TEXTURE_CUBE_MAP. Using these formats in conjunction with any other target will result in an INVALID_OPERATION error.

Textures with a base internal format of DEPTH_COMPONENT or DEPTH_STENCIL require either depth component data or depth/stencil component data. Textures with other base internal formats require RGBA component data. The error INVALID_OPERATION is generated if one of the base internal format and format is DEPTH_COMPONENT or DEPTH_STENCIL, and the other is neither of these values.

Textures with integer internal formats (tables 3.16-3.17) require integer data. The error INVALID_OPERATION is generated if the internal format is integer and format is not one of the integer formats listed in table 3.6; if the internal format is not integer and format is an integer format; or if format is an integer format and type is FLOAT.

Base Internal Format  RGBA, Depth, and Stencil Values  Internal Components
ALPHA                 A                                A
DEPTH_COMPONENT       Depth                            D
DEPTH_STENCIL         Depth,Stencil                    D,S
LUMINANCE             R                                L
LUMINANCE_ALPHA       R,A                              L,A
INTENSITY             R                                I
RED                   R                                R
RG                    R,G                              R,G
RGB                   R,G,B                            R,G,B
RGBA                  R,G,B,A                          R,G,B,A

Table 3.15: Conversion from RGBA, depth, and stencil pixel components to internal texture, table, or filter components. See section 3.9.13 for a description of the texture components R, G, B, A, L, I, D, and S.

The GL provides no specific compressed internal formats but does provide a mechanism to obtain token values for such formats provided by extensions. The number of specific compressed internal formats supported by the renderer can be obtained by querying the value of NUM_COMPRESSED_TEXTURE_FORMATS. The set of specific compressed internal formats supported by the renderer can be obtained by querying the value of COMPRESSED_TEXTURE_FORMATS. The only values returned by this query are those corresponding to formats suitable for general-purpose usage. The renderer will not enumerate formats with restrictions that need to be specifically understood prior to use.

Generic compressed internal formats are never used directly as the internal formats of texture images. If internalformat is one of the six generic compressed internal formats, its value is replaced by the symbolic constant for a specific compressed internal format of the GL's choosing with the same base internal format. If no specific compressed format is available, internalformat is instead replaced by the corresponding base internal format. If internalformat is given as or mapped to a specific compressed internal format, but the GL can not support images compressed in the chosen internal format for any reason (e.g., the compression format might not support 3D textures or borders), internalformat is replaced by the corresponding base internal format and the texture image will not be compressed by the GL.


The internal component resolution is the number of bits allocated to each value in a texture image. If internalformat is specified as a base internal format, the GL stores the resulting texture with internal component resolutions of its own choosing. If a sized internal format is specified, the mapping of the R, G, B, A, depth, and stencil values to texture components is equivalent to the mapping of the corresponding base internal format's components, as specified in table 3.15; the type (unsigned int, float, etc.) is assigned the same type specified by internalformat; and the memory allocation per texture component is assigned by the GL to match the allocations listed in tables 3.16-3.18 as closely as possible. (The definition of closely is left up to the implementation. However, a non-zero number of bits must be allocated for each component whose desired allocation in tables 3.16-3.18 is non-zero, and zero bits must be allocated for all other components.)

Required Texture Formats

Implementations are required to support at least one allocation of internal component resolution for each type (unsigned int, float, etc.) for each base internal format.

In addition, implementations are required to support the following sized internal formats. Requesting one of these internal formats for any texture type will allocate exactly the internal component sizes and types shown for that format in tables 3.16-3.17:

Color formats:

RGBA32F, RGBA32I, RGBA32UI, RGBA16, RGBA16F, RGBA16I, RGBA16UI, RGBA8, RGBA8I, RGBA8UI, SRGB8_ALPHA8, and RGB10_A2.
R11F_G11F_B10F.
RG32F, RG32I, RG32UI, RG16, RG16F, RG16I, RG16UI, RG8, RG8I, and RG8UI.
R32F, R32I, R32UI, R16F, R16I, R16UI, R16, R8, R8I, and R8UI.
ALPHA8.

Color formats (texture-only):

RGB32F, RGB32I, and RGB32UI.
RGB16F, RGB16I, RGB16UI, and RGB16.
RGB8, RGB8I, RGB8UI, and SRGB8.
RGB9_E5.
COMPRESSED_RG_RGTC2 and COMPRESSED_SIGNED_RG_RGTC2.
COMPRESSED_RED_RGTC1 and COMPRESSED_SIGNED_RED_RGTC1.

Depth formats: DEPTH_COMPONENT32F, DEPTH_COMPONENT24, and DEPTH_COMPONENT16.

Combined depth+stencil formats: DEPTH32F_STENCIL8 and DEPTH24_STENCIL8.

Encoding of Special Internal Formats

If internalformat is R11F_G11F_B10F, the red, green, and blue bits are converted to unsigned 11-bit, unsigned 11-bit, and unsigned 10-bit floating-point values as described in sections 2.1.3 and 2.1.4.

If internalformat is RGB9_E5, the red, green, and blue bits are converted to a shared exponent format according to the following procedure:

Components red, green, and blue are first clamped (in the process, mapping NaN to zero) as follows:

    red_c = max(0, min(sharedexp_max, red))
    green_c = max(0, min(sharedexp_max, green))
    blue_c = max(0, min(sharedexp_max, blue))

where

    sharedexp_max = \frac{2^N - 1}{2^N} 2^{E_{max} - B}.

N is the number of mantissa bits per component (9), B is the exponent bias (15), and E_max is the maximum allowed biased exponent value (31).

The largest clamped component, max_c, is determined:

    max_c = max(red_c, green_c, blue_c)

A preliminary shared exponent exp_p is computed:

    exp_p = max(-B - 1, \lfloor \log_2(max_c) \rfloor) + 1 + B

A refined shared exponent exp_s is computed:

    max_s = \left\lfloor \frac{max_c}{2^{exp_p - B - N}} + \frac{1}{2} \right\rfloor

    exp_s = \begin{cases} exp_p, & 0 \le max_s < 2^N \\ exp_p + 1, & max_s = 2^N \end{cases}

Finally, three integer values in the range 0 to 2^N - 1 are computed:

    red_s = \left\lfloor \frac{red_c}{2^{exp_s - B - N}} + \frac{1}{2} \right\rfloor

    green_s = \left\lfloor \frac{green_c}{2^{exp_s - B - N}} + \frac{1}{2} \right\rfloor

    blue_s = \left\lfloor \frac{blue_c}{2^{exp_s - B - N}} + \frac{1}{2} \right\rfloor

The resulting red_s, green_s, blue_s, and exp_s are stored in the red, green, blue, and shared bits respectively of the texture image.

An implementation accepting pixel data of type UNSIGNED_INT_5_9_9_9_REV with format RGB is allowed to store the components "as is" if the implementation can determine that the current pixel transfer state acts as an identity transform on the components.

Sized Internal Format Base Internal Format R bits G bits B bits A bits Shared bits
ALPHA4 ALPHA 4
ALPHA8 ALPHA 8
ALPHA12 ALPHA 12
ALPHA16 ALPHA 16
R8 RED 8
R16 RED 16
RG8 RG 8 8
RG16 RG 16 16
R3_G3_B2 RGB 3 3 2
RGB4 RGB 4 4 4
RGB5 RGB 5 5 5
RGB8 RGB 8 8 8
RGB10 RGB 10 10 10
RGB12 RGB 12 12 12
RGB16 RGB 16 16 16
Sized internal color formats continued on next page

3.9. TEXTURING

Sized internal color formats continued from previous page
Sized Internal Format Base Internal Format R bits G bits B bits A bits Shared bits
RGBA2 RGBA 2 2 2 2
RGBA4 RGBA 4 4 4 4
RGB5_A1 RGBA 5 5 5 1
RGBA8 RGBA 8 8 8 8
RGB10_A2 RGBA 10 10 10 2
RGBA12 RGBA 12 12 12 12
RGBA16 RGBA 16 16 16 16
SRGB8 RGB 8 8 8
SRGB8_ALPHA8 RGBA 8 8 8 8
R16F RED f16
RG16F RG f16 f16
RGB16F RGB f16 f16 f16
RGBA16F RGBA f16 f16 f16 f16
R32F RED f32
RG32F RG f32 f32
RGB32F RGB f32 f32 f32
RGBA32F RGBA f32 f32 f32 f32
R11F_G11F_B10F RGB f11 f11 f10
RGB9_E5 RGB 9 9 9 5
R8I RED i8
R8UI RED ui8
R16I RED i16
R16UI RED ui16
R32I RED i32
R32UI RED ui32
RG8I RG i8 i8
RG8UI RG ui8 ui8
RG16I RG i16 i16
RG16UI RG ui16 ui16
RG32I RG i32 i32
RG32UI RG ui32 ui32
RGB8I RGB i8 i8 i8
RGB8UI RGB ui8 ui8 ui8
RGB16I RGB i16 i16 i16
Sized internal color formats continued on next page

3.9. TEXTURING

Sized internal color formats continued from previous page
Sized Internal Format Base Internal Format R bits G bits B bits A bits Shared bits
RGB16UI RGB ui16 ui16 ui16
RGB32I RGB i32 i32 i32
RGB32UI RGB ui32 ui32 ui32
RGBA8I RGBA i8 i8 i8 i8
RGBA8UI RGBA ui8 ui8 ui8 ui8
RGBA16I RGBA i16 i16 i16 i16
RGBA16UI RGBA ui16 ui16 ui16 ui16
RGBA32I RGBA i32 i32 i32 i32
RGBA32UI RGBA ui32 ui32 ui32 ui32

Table 3.16: Correspondence of sized internal color formats to base internal formats, internal data type, and desired component resolutions for each sized internal format. The component resolution prefix indicates the internal data type: f is floating point, i is signed integer, ui is unsigned integer, and no prefix is fixed-point.

Sized Internal Format Base Internal Format A bits L bits I bits
LUMINANCE4 LUMINANCE 4
LUMINANCE8 LUMINANCE 8
LUMINANCE12 LUMINANCE 12
LUMINANCE16 LUMINANCE 16
LUMINANCE4_ALPHA4 LUMINANCE_ALPHA 4 4
LUMINANCE6_ALPHA2 LUMINANCE_ALPHA 2 6
LUMINANCE8_ALPHA8 LUMINANCE_ALPHA 8 8
LUMINANCE12_ALPHA4 LUMINANCE_ALPHA 4 12
LUMINANCE12_ALPHA12 LUMINANCE_ALPHA 12 12
LUMINANCE16_ALPHA16 LUMINANCE_ALPHA 16 16
INTENSITY4 INTENSITY 4
INTENSITY8 INTENSITY 8
INTENSITY12 INTENSITY 12
INTENSITY16 INTENSITY 16
Sized internal luminance formats continued on next page

3.9. TEXTURING

Sized Internal Format Base Internal Format D bits S bits
DEPTH_COMPONENT16 DEPTH_COMPONENT 16
DEPTH_COMPONENT24 DEPTH_COMPONENT 24
DEPTH_COMPONENT32 DEPTH_COMPONENT 32
DEPTH_COMPONENT32F DEPTH_COMPONENT f32
DEPTH24_STENCIL8 DEPTH_STENCIL 24 8
DEPTH32F_STENCIL8 DEPTH_STENCIL f32 8

Table 3.18: Correspondence of sized internal depth and stencil formats to base internal formats, internal data type, and desired component resolutions for each sized internal format. The component resolution prefix indicates the internal data type: f is floating point, i is signed integer, ui is unsigned integer, and no prefix is fixed-point.

Sized internal luminance formats continued from previous page
Sized Internal Format Base Internal Format A bits L bits I bits
SLUMINANCE8 LUMINANCE 8
SLUMINANCE8_ALPHA8 LUMINANCE_ALPHA 8 8

Table 3.17: Correspondence of sized internal luminance and intensity formats to base internal formats, internal data type, and desired component resolutions for each sized internal format. The component resolution prefix indicates the internal data type: f is floating point, i is signed integer, ui is unsigned integer, and no prefix is fixed-point.

If a compressed internal format is specified, the mapping of the R, G, B, and A values to texture components is equivalent to the mapping of the corresponding base internal format’s components, as specified in table 3.15. The specified image is compressed using a (possibly lossy) compression algorithm chosen by the GL.

A GL implementation may vary its allocation of internal component resolution or compressed internal format based on any TexImage3D, TexImage2D (see below), or TexImage1D (see below) parameter (except target), but the allocation and chosen compressed image format must not be a function of any other state and cannot be changed once they are established. In addition, the choice of a compressed


Compressed Internal Format Base Internal Format Type
COMPRESSED_ALPHA ALPHA Generic
COMPRESSED_LUMINANCE LUMINANCE Generic
COMPRESSED_LUMINANCE_ALPHA LUMINANCE_ALPHA Generic
COMPRESSED_INTENSITY INTENSITY Generic
COMPRESSED_RED RED Generic
COMPRESSED_RG RG Generic
COMPRESSED_RGB RGB Generic
COMPRESSED_RGBA RGBA Generic
COMPRESSED_SRGB RGB Generic
COMPRESSED_SRGB_ALPHA RGBA Generic
COMPRESSED_SLUMINANCE LUMINANCE Generic
COMPRESSED_SLUMINANCE_ALPHA LUMINANCE_ALPHA Generic
COMPRESSED_RED_RGTC1 RED Specific
COMPRESSED_SIGNED_RED_RGTC1 RED Specific
COMPRESSED_RG_RGTC2 RG Specific
COMPRESSED_SIGNED_RG_RGTC2 RG Specific

Table 3.19: Generic and specific compressed internal formats. The specific RGTC formats are described in appendix C.1.


image format may not be affected by the data parameter. Allocations must be invariant; the same allocation and compressed image format must be chosen each time a texture image is specified with the same parameter values. These allocation rules also apply to proxy textures, which are described in section 3.9.11.

The image itself (referred to by data) is a sequence of groups of values. The first group is the lower left back corner of the texture image. Subsequent groups fill out rows of width width from left to right; height rows are stacked from bottom to top forming a single two-dimensional image slice; and depth slices are stacked from back to front. When the final R, G, B, and A components have been computed for a group, they are assigned to components of a texel as described by table 3.15. Counting from zero, each resulting Nth texel is assigned internal integer coordinates (i, j, k), where

i = (N mod width) - wb

j = (⌊N / width⌋ mod height) - hb

k = (⌊N / (width × height)⌋ mod depth) - db

and wb, hb, and db are the specified border width, height, and depth. wb and hb are the specified border value; db is the specified border value if target is TEXTURE_3D, or zero if target is TEXTURE_2D_ARRAY. Thus the last two-dimensional image slice of the three-dimensional image is indexed with the highest value of k.
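As a plain-C sketch of these equations (texel_coords is a hypothetical helper for illustration, not a GL entry point), the Nth pixel group lands at:

```c
#include <assert.h>

/* Hypothetical helper: maps the Nth pixel group of a TexImage3D upload to
 * internal texel coordinates (i, j, k) per the equations above. width,
 * height, and depth are the image dimensions; wb, hb, db the border sizes. */
void texel_coords(int N, int width, int height, int depth,
                  int wb, int hb, int db,
                  int *i, int *j, int *k)
{
    *i = (N % width) - wb;                      /* N mod width, shifted by border */
    *j = ((N / width) % height) - hb;           /* floor(N / width) mod height */
    *k = ((N / (width * height)) % depth) - db; /* floor(N / (width*height)) mod depth */
}
```

For a 4×3×2 image without borders, group 13 lands at (1, 0, 1), i.e. in the second slice, consistent with slices being stacked back to front.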

Each color component is converted (by rounding to nearest) to a fixed-point value with n bits, where n is the number of bits of storage allocated to that component in the image array. We assume that the fixed-point representation used represents each value k/(2^n - 1), where k ∈ {0, 1, ..., 2^n - 1}, as k (e.g. 1.0 is represented in binary as a string of all ones).
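For illustration, the round-to-nearest conversion above can be sketched as follows (fixed_from_float is a hypothetical helper, assuming the component is already clamped to [0, 1]):

```c
#include <assert.h>

/* Hypothetical helper: converts a component v in [0, 1] to an n-bit
 * fixed-point value k, where k represents the value k / (2^n - 1). */
unsigned fixed_from_float(double v, unsigned n)
{
    double max = (double)((1u << n) - 1u); /* 2^n - 1 */
    return (unsigned)(v * max + 0.5);      /* round to nearest (v >= 0) */
}
```

As the text describes, 1.0 maps to a string of all ones (255 for n = 8).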

The level argument to TexImage3D is an integer level-of-detail number. Levels of detail are discussed below, under Mipmapping. The main texture image has a level of detail number of 0. If a level-of-detail less than zero is specified, the error INVALID_VALUE is generated.

The border argument to TexImage3D is a border width. The significance of borders is described below. The border width affects the dimensions of the texture image: let

ws = wt + 2wb
hs = ht + 2hb        (3.15)
ds = dt + 2db


where ws, hs, and ds are the specified image width, height, and depth, and wt, ht, and dt are the dimensions of the texture image internal to the border. If wt, ht, or dt are less than zero, then the error INVALID_VALUE is generated.

An image with zero width, height, or depth indicates the null texture. If the null texture is specified for the level-of-detail specified by texture parameter TEXTURE_BASE_LEVEL (see section 3.9.4), it is as if texturing were disabled.

Currently, the maximum border width bt is 1. If border is less than zero, or greater than bt, then the error INVALID_VALUE is generated.

The maximum allowable width, height, or depth of a texel array for a three-dimensional texture is an implementation dependent function of the level-of-detail and internal format of the resulting image array. It must be at least 2^(k-lod) + 2bt for image arrays of level-of-detail 0 through k, where k is the log base 2 of MAX_3D_TEXTURE_SIZE, lod is the level-of-detail of the image array, and bt is the maximum border width. It may be zero for image arrays of any level-of-detail greater than k. The error INVALID_VALUE is generated if the specified image is too large to be stored under any conditions.
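This minimum-size rule can be sketched in C (min_required_3d_size is a hypothetical helper for illustration; the GL itself only exposes the MAX_3D_TEXTURE_SIZE query, not such a function):

```c
#include <assert.h>

/* Hypothetical helper: the minimum 3D texture dimension an implementation
 * must support at a given level-of-detail, 2^(k - lod) + 2*bt, where
 * k = log2(max3DTextureSize) and bt is the maximum border width. */
int min_required_3d_size(int max3DTextureSize, int lod, int bt)
{
    int k = 0;
    while ((1 << (k + 1)) <= max3DTextureSize)
        k++;                      /* k = log base 2 of the maximum size */
    if (lod > k)
        return 0;                 /* may be zero beyond level k */
    return (1 << (k - lod)) + 2 * bt;
}
```

With a hypothetical MAX_3D_TEXTURE_SIZE of 256 and bt = 1, level 0 must support at least 258 texels per dimension and level 8 at least 3 (a single texel plus the border).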

If a pixel unpack buffer object is bound and storing texture data would access memory beyond the end of the pixel unpack buffer, an INVALID_OPERATION error results.

In a similar fashion, the maximum allowable width of a texel array for a one- or two-dimensional, or one- or two-dimensional array texture, and the maximum allowable height of a two-dimensional or two-dimensional array texture, must be at least 2^(k-lod) + 2bt for image arrays of level 0 through k, where k is the log base 2 of MAX_TEXTURE_SIZE. The maximum allowable width and height of a cube map texture must be the same, and must be at least 2^(k-lod) + 2bt for image arrays level 0 through k, where k is the log base 2 of MAX_CUBE_MAP_TEXTURE_SIZE. The maximum number of layers for one- and two-dimensional array textures (height or depth, respectively) must be at least MAX_ARRAY_TEXTURE_LAYERS for all levels.

An implementation may allow an image array of level 0 to be created only if that single image array can be supported. Additional constraints on the creation of image arrays of level 1 or greater are described in more detail in section 3.9.10.

The command

void TexImage2D( enum target, int level, int internalformat, sizei width, sizei height, int border, enum format, enum type, void *data );

is used to specify a two-dimensional texture image. target must be one of TEXTURE_2D for a two-dimensional texture, TEXTURE_1D_ARRAY for a one-dimensional array texture, or one of TEXTURE_CUBE_MAP_POSITIVE_X,


TEXTURE_CUBE_MAP_NEGATIVE_X, TEXTURE_CUBE_MAP_POSITIVE_Y, TEXTURE_CUBE_MAP_NEGATIVE_Y, TEXTURE_CUBE_MAP_POSITIVE_Z, or TEXTURE_CUBE_MAP_NEGATIVE_Z for a cube map texture. Additionally, target may be either PROXY_TEXTURE_2D for a two-dimensional proxy texture, PROXY_TEXTURE_1D_ARRAY for a one-dimensional proxy array texture, or PROXY_TEXTURE_CUBE_MAP for a cube map proxy texture in the special case discussed in section 3.9.11. The other parameters match the corresponding parameters of TexImage3D.

For the purposes of decoding the texture image, TexImage2D is equivalent to calling TexImage3D with corresponding arguments and depth of 1, except that

  • The border depth, db, is zero, and the depth of the image is always 1 regardless of the value of border.

  • The border height, hb, is zero if target is TEXTURE_1D_ARRAY, and border otherwise.

  • Convolution will be performed on the image (possibly changing its width and height) if SEPARABLE_2D or CONVOLUTION_2D is enabled.

UNPACK_SKIP_IMAGES is ignored.

A two-dimensional texture consists of a single two-dimensional texture image. A cube map texture is a set of six two-dimensional texture images. The six cube map texture targets form a single cube map texture though each target names a distinct face of the cube map. The TEXTURE_CUBE_MAP_* targets listed above update their appropriate cube map face 2D texture image. Note that the six cube map two-dimensional image tokens such as TEXTURE_CUBE_MAP_POSITIVE_X are used when specifying, updating, or querying one of a cube map's six two-dimensional images, but when enabling cube map texturing or binding to a cube map texture object (that is when the cube map is accessed as a whole as opposed to a particular two-dimensional image), the TEXTURE_CUBE_MAP target is specified.

When the target parameter to TexImage2D is one of the six cube map two-dimensional image targets, the error INVALID_VALUE is generated if the width and height parameters are not equal.

Finally, the command

void TexImage1D( enum target, int level, int internalformat, sizei width, int border, enum format, enum type, void *data );


is used to specify a one-dimensional texture image. target must be either TEXTURE_1D, or PROXY_TEXTURE_1D in the special case discussed in section 3.9.11.

For the purposes of decoding the texture image, TexImage1D is equivalent to calling TexImage2D with corresponding arguments and height of 1, except that

  • The border height and depth (hb and db) are always zero, regardless of the value of border.

  • Convolution will be performed on the image (possibly changing its width) only if CONVOLUTION_1D is enabled.

The image indicated to the GL by the image pointer is decoded and copied into the GL's internal memory. This copying effectively places the decoded image inside a border of the maximum allowable width bt, whether or not a border has been specified (see figure 3.10) 1 . If no border or a border smaller than the maximum allowable width has been specified, then the image is still stored as if it were surrounded by a border of the maximum possible width. Any excess border (which surrounds the specified image, including any border) is assigned unspecified values. A two-dimensional texture has a border only at its left, right, top, and bottom ends, and a one-dimensional texture has a border only at its left and right ends.

We shall refer to the (possibly border augmented) decoded image as the texel array. A three-dimensional texel array has width, height, and depth ws, hs, and ds as defined in equation 3.15. A two-dimensional texel array has depth ds = 1, with height hs and width ws as above, and a one-dimensional texel array has depth ds = 1, height hs = 1, and width ws as above.

An element (i, j, k) of the texel array is called a texel (for a two-dimensional texture or one-dimensional array texture, k is irrelevant; for a one-dimensional texture, j and k are both irrelevant). The texture value used in texturing a fragment is determined by that fragment's associated (s, t, r) coordinates, but may not correspond to any actual texel. See figure 3.10.

If the data argument of TexImage1D, TexImage2D, or TexImage3D is a null pointer (a zero-valued pointer in the C implementation), and the pixel unpack buffer object is zero, a one-, two-, or three-dimensional texel array is created with the specified target, level, internalformat, border, width, height, and depth, but with unspecified image contents. In this case no pixel values are accessed in client memory, and no pixel processing is performed. Errors are generated, however, exactly as though the data pointer were valid. Otherwise if the pixel unpack buffer object is non-zero, the data argument is treated normally to refer to the beginning of the pixel unpack buffer object's data.

1 Figure 3.10 needs to show a three-dimensional texture image.


Figure 3.10. A texture image and the coordinates used to access it. This is a two-dimensional texture with n = 3 and m = 2. A one-dimensional texture would consist of a single horizontal strip. α and β, values used in blending adjacent texels to obtain a texture value, are also shown.


3.9.2 Alternate Texture Image Specification Commands

Two-dimensional and one-dimensional texture images may also be specified using image data taken directly from the framebuffer, and rectangular subregions of existing texture images may be respecified.

The command

void CopyTexImage2D( enum target, int level, enum internalformat, int x, int y, sizei width, sizei height, int border );

defines a two-dimensional texel array in exactly the manner of TexImage2D, except that the image data are taken from the framebuffer rather than from client memory. Currently, target must be one of TEXTURE_2D, TEXTURE_1D_ARRAY, TEXTURE_CUBE_MAP_POSITIVE_X, TEXTURE_CUBE_MAP_NEGATIVE_X, TEXTURE_CUBE_MAP_POSITIVE_Y, TEXTURE_CUBE_MAP_NEGATIVE_Y, TEXTURE_CUBE_MAP_POSITIVE_Z, or TEXTURE_CUBE_MAP_NEGATIVE_Z. x, y, width, and height correspond precisely to the corresponding arguments to CopyPixels (refer to section 4.3.3); they specify the image's width and height, and the lower left (x, y) coordinates of the framebuffer region to be copied. The image is taken from the framebuffer exactly as if these arguments were passed to CopyPixels with argument type set to COLOR, DEPTH, or DEPTH_STENCIL, depending on internalformat, stopping after pixel transfer processing is complete. RGBA data is taken from the current color buffer, while depth component and stencil index data are taken from the depth and stencil buffers, respectively. The error INVALID_OPERATION is generated if depth component data is required and no depth buffer is present; if stencil index data is required and no stencil buffer is present; if integer RGBA data is required and the format of the current color buffer is not integer; or if floating- or fixed-point RGBA data is required and the format of the current color buffer is integer.

Subsequent processing is identical to that described for TexImage2D, beginning with clamping of the R, G, B, A, or depth values, and masking of the stencil index values from the resulting pixel groups. Parameters level, internalformat, and border are specified using the same values, with the same meanings, as the equivalent arguments of TexImage2D, except that internalformat may not be specified as 1, 2, 3, or 4. An invalid value specified for internalformat generates the error INVALID_ENUM. The constraints on width, height, and border are exactly those for the equivalent arguments of TexImage2D.

When the target parameter to CopyTexImage2D is one of the six cube map two-dimensional image targets, the error INVALID_VALUE is generated if the width and height parameters are not equal.



The command

void CopyTexImage1D( enum target, int level, enum internalformat, int x, int y, sizei width, int border );

defines a one-dimensional texel array in exactly the manner of TexImage1D, except that the image data are taken from the framebuffer, rather than from client memory. Currently, target must be TEXTURE_1D. For the purposes of decoding the texture image, CopyTexImage1D is equivalent to calling CopyTexImage2D with corresponding arguments and height of 1, except that the height of the image is always 1, regardless of the value of border. level, internalformat, and border are specified using the same values, with the same meanings, as the equivalent arguments of TexImage1D, except that internalformat may not be specified as 1, 2, 3, or 4. The constraints on width and border are exactly those of the equivalent arguments of TexImage1D.

Six additional commands,

void TexSubImage3D( enum target, int level, int xoffset, int yoffset, int zoffset, sizei width, sizei height, sizei depth, enum format, enum type, void *data );

void TexSubImage2D( enum target, int level, int xoffset, int yoffset, sizei width, sizei height, enum format, enum type, void *data );

void TexSubImage1D( enum target, int level, int xoffset, sizei width, enum format, enum type, void *data );

void CopyTexSubImage3D( enum target, int level, int xoffset, int yoffset, int zoffset, int x, int y, sizei width, sizei height );

void CopyTexSubImage2D( enum target, int level, int xoffset, int yoffset, int x, int y, sizei width, sizei height );

void CopyTexSubImage1D( enum target, int level, int xoffset, int x, int y, sizei width );

respecify only a rectangular subregion of an existing texel array. No change is made to the internalformat, width, height, depth, or border parameters of the specified texel array, nor is any change made to texel values outside the specified subregion. Currently the target arguments of TexSubImage1D and CopyTexSubImage1D must be TEXTURE_1D, the target arguments of TexSubImage2D


and CopyTexSubImage2D must be one of TEXTURE_2D, TEXTURE_1D_ARRAY, TEXTURE_CUBE_MAP_POSITIVE_X, TEXTURE_CUBE_MAP_NEGATIVE_X, TEXTURE_CUBE_MAP_POSITIVE_Y, TEXTURE_CUBE_MAP_NEGATIVE_Y, TEXTURE_CUBE_MAP_POSITIVE_Z, or TEXTURE_CUBE_MAP_NEGATIVE_Z, and the target arguments of TexSubImage3D and CopyTexSubImage3D must be TEXTURE_3D or TEXTURE_2D_ARRAY. The level parameter of each command specifies the level of the texel array that is modified. If level is less than zero or greater than the base 2 logarithm of the maximum texture width, height, or depth, the error INVALID_VALUE is generated.

TexSubImage3D arguments width, height, depth, format, type, and data match the corresponding arguments to TexImage3D, meaning that they are specified using the same values, and have the same meanings. Likewise, TexSubImage2D arguments width, height, format, type, and data match the corresponding arguments to TexImage2D, and TexSubImage1D arguments width, format, type, and data match the corresponding arguments to TexImage1D.

CopyTexSubImage3D and CopyTexSubImage2D arguments x, y, width, and height match the corresponding arguments to CopyTexImage2D 2 . CopyTexSubImage1D arguments x, y, and width match the corresponding arguments to CopyTexImage1D. Each of the TexSubImage commands interprets and processes pixel groups in exactly the manner of its TexImage counterpart, except that the assignment of R, G, B, A, depth, and stencil index pixel group values to the texture components is controlled by the internalformat of the texel array, not by an argument to the command. The same constraints and errors apply to the TexSubImage commands' argument format and the internalformat of the texel array being respecified as apply to the format and internalformat arguments of its TexImage counterparts.

Arguments xoffset, yoffset, and zoffset of TexSubImage3D and CopyTexSubImage3D specify the lower left texel coordinates of a width-wide by height-high by depth-deep rectangular subregion of the texel array. The depth argument associated with CopyTexSubImage3D is always 1, because framebuffer memory is two-dimensional; only a portion of a single (s, t) slice of a three-dimensional texture is replaced by CopyTexSubImage3D.

Negative values of xoffset, yoffset, and zoffset correspond to the coordinates of border texels, addressed as in figure 3.10. Taking ws, hs, ds, wb, hb, and db to be the specified width, height, depth, and border width, border height, and border depth of the texel array, and taking x, y, z, w, h, and d to be the xoffset, yoffset, zoffset, width, height, and depth argument values, any of the following relationships

2 Because the framebuffer is inherently two-dimensional, there is no CopyTexImage3D command.


generates the error INVALID_VALUE:

x < -wb

x + w > ws - wb

y < -hb

y + h > hs - hb

z < -db

z + d > ds - db
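These range checks can be sketched as a C predicate (subimage3d_range_valid is a hypothetical helper, not a GL entry point; it returns false exactly when one of the relationships above would generate INVALID_VALUE):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical helper: offsets may reach into the border (down to -wb,
 * -hb, -db), but the subregion must not extend past the non-border edge
 * of the texel array. */
bool subimage3d_range_valid(int x, int y, int z, int w, int h, int d,
                            int ws, int hs, int ds,
                            int wb, int hb, int db)
{
    if (x < -wb || x + w > ws - wb) return false;
    if (y < -hb || y + h > hs - hb) return false;
    if (z < -db || z + d > ds - db) return false;
    return true;
}
```

For a 10×10×10 texel array with border 1, replacing the whole bordered array from offset (-1, -1, -1) is valid, while a 10-wide update starting at offset 0 would run past the edge.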

Counting from zero, the nth pixel group is assigned to the texel with internal integer coordinates [i, j, k], where

i = x + (n mod w)

j = y + (⌊n / w⌋ mod h)

k = z + (⌊n / (width × height)⌋ mod d)

Arguments xoffset and yoffset of TexSubImage2D and CopyTexSubImage2D specify the lower left texel coordinates of a width-wide by height-high rectangular subregion of the texel array. Negative values of xoffset and yoffset correspond to the coordinates of border texels, addressed as in figure 3.10. Taking ws, hs, and bs to be the specified width, height, and border width of the texel array, and taking x, y, w, and h to be the xoffset, yoffset, width, and height argument values, any of the following relationships generates the error INVALID_VALUE:

x < -bs

x + w > ws - bs

y < -bs

y + h > hs - bs

Counting from zero, the nth pixel group is assigned to the texel with internal integer coordinates [i,j], where

i = x + (n mod w)

j = y + (⌊n / w⌋ mod h)


The xoffset argument of TexSubImage1D and CopyTexSubImage1D specifies the left texel coordinate of a width-wide subregion of the texel array. Negative values of xoffset correspond to the coordinates of border texels. Taking ws and bs to be the specified width and border width of the texel array, and x and w to be the xoffset and width argument values, either of the following relationships generates the error INVALID_VALUE:

x < -bs

x + w > ws - bs

Counting from zero, the nth pixel group is assigned to the texel with internal integer coordinates [i], where

i = x + (n mod w)

Texture images with compressed internal formats may be stored in such a way that it is not possible to modify an image with subimage commands without having to decompress and recompress the texture image. Even if the image were modified in this manner, it may not be possible to preserve the contents of some of the texels outside the region being modified. To avoid these complications, the GL does not support arbitrary modifications to texture images with compressed internal formats. Calling TexSubImage3D, CopyTexSubImage3D, TexSubImage2D, CopyTexSubImage2D, TexSubImage1D, or CopyTexSubImage1D will result in an INVALID_OPERATION error if xoffset, yoffset, or zoffset is not equal to -bs (border width). In addition, the contents of any texel outside the region modified by such a call are undefined. These restrictions may be relaxed for specific compressed internal formats whose images are easily modified.

If the internal format of the texture image being modified is one of the specific RGTC formats described in table 3.19, the texture is stored using one of the RGTC texture image encodings (see appendix C.1). Since RGTC images are easily edited along 4 × 4 texel boundaries, the limitations on subimage location and size are relaxed for TexSubImage2D, TexSubImage3D, CopyTexSubImage2D, and CopyTexSubImage3D. These commands will generate an INVALID_OPERATION error if one of the following conditions occurs:

  • width is not a multiple of four or equal to TEXTURE_WIDTH, unless xoffset and yoffset are both zero.

  • height is not a multiple of four or equal to TEXTURE_HEIGHT, unless xoffset and yoffset are both zero.

  • xoffset or yoffset is not a multiple of four.

The contents of any 4 × 4 block of texels of an RGTC compressed texture image that does not intersect the area being modified are preserved during valid TexSubImage* and CopyTexSubImage* calls.
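The relaxed RGTC alignment conditions above can be sketched as a C predicate (rgtc_subimage_ok is a hypothetical helper for illustration; it returns false exactly when one of the listed conditions would generate INVALID_OPERATION):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical helper: a TexSubImage2D region on an RGTC image must be
 * 4x4-block aligned, or run to the full image extent, or (for the size
 * conditions) start at offset (0, 0). */
bool rgtc_subimage_ok(int xoffset, int yoffset, int w, int h,
                      int texWidth, int texHeight)
{
    bool origin = (xoffset == 0 && yoffset == 0);
    if (xoffset % 4 != 0 || yoffset % 4 != 0)
        return false;                       /* offsets must be block aligned */
    if (w % 4 != 0 && w != texWidth && !origin)
        return false;                       /* width must be aligned or full */
    if (h % 4 != 0 && h != texHeight && !origin)
        return false;                       /* height must be aligned or full */
    return true;
}
```

For a 16×16 image, an 8×8 update at (4, 0) is valid, a 6-wide update at (4, 0) is not, and a 6-wide update at (0, 0) is allowed by the "unless xoffset and yoffset are both zero" exception.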

Calling CopyTexSubImage3D, CopyTexImage2D, CopyTexSubImage2D, CopyTexImage1D, or CopyTexSubImage1D will result in an INVALID_FRAMEBUFFER_OPERATION error if the object bound to READ_FRAMEBUFFER_BINDING is not framebuffer complete (see section 4.4.4).

3.9.3 Compressed Texture Images

Texture images may also be specified or modified using image data already stored in a known compressed image format, such as the RGTC formats defined in appendix C, or additional formats defined by GL extensions.

The commands

void CompressedTexImage1D( enum target, int level, enum internalformat, sizei width, int border, sizei imageSize, void *data );

void CompressedTexImage2D( enum target, int level, enum internalformat, sizei width, sizei height, int border, sizei imageSize, void *data );

void CompressedTexImage3D( enum target, int level, enum internalformat, sizei width, sizei height, sizei depth, int border, sizei imageSize, void *data );

define one-, two-, and three-dimensional texture images, respectively, with incoming data stored in a specific compressed image format. The target, level, internalformat, width, height, depth, and border parameters have the same meaning as in TexImage1D, TexImage2D, and TexImage3D. data refers to compressed image data stored in the specific compressed image format corresponding to internalformat. If a pixel unpack buffer is bound (as indicated by a non-zero value of PIXEL_UNPACK_BUFFER_BINDING), data is an offset into the pixel unpack buffer and the compressed data is read from the buffer relative to this offset; otherwise, data is a pointer to client memory and the compressed data is read from client memory relative to the pointer.

internalformat must be a supported specific compressed internal format. An INVALID_ENUM error will be generated if any other value, including any of the six generic compressed internal formats, is specified.


For all other compressed internal formats, the compressed image will be decoded according to the specification defining the internalformat token. Compressed texture images are treated as an array of imageSize ubytes relative to data. If a pixel unpack buffer object is bound and data + imageSize is greater than the size of the pixel buffer, an INVALID_OPERATION error results. All pixel storage and pixel transfer modes are ignored when decoding a compressed texture image. If the imageSize parameter is not consistent with the format, dimensions, and contents of the compressed image, an INVALID_VALUE error results. If the compressed image is not encoded according to the defined image format, the results of the call are undefined.
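For a block-compressed format such as RGTC, the consistent imageSize is a simple function of the dimensions; a sketch (rgtc_image_size is a hypothetical helper, assuming the appendix C.1 block sizes of 8 bytes per 4 × 4 block for the one-channel RGTC1 formats and 16 bytes for the two-channel RGTC2 formats):

```c
#include <assert.h>

/* Hypothetical helper: expected imageSize for an RGTC image, assuming the
 * image is stored as a grid of 4x4 blocks, with partial blocks at the
 * right/top edges rounded up to whole blocks. */
int rgtc_image_size(int width, int height, int bytesPerBlock)
{
    int bw = (width + 3) / 4;    /* blocks per row, rounded up */
    int bh = (height + 3) / 4;   /* block rows, rounded up */
    return bw * bh * bytesPerBlock;
}
```

An 8×8 RGTC1 image thus occupies four 8-byte blocks (32 bytes); a 5×5 RGTC2 image rounds up to a 2×2 grid of 16-byte blocks (64 bytes).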

Specific compressed internal formats may impose format-specific restrictions on the use of the compressed image specification calls or parameters. For example, the compressed image format might be supported only for 2D textures, or might not allow non-zero border values. Any such restrictions will be documented in the extension specification defining the compressed internal format; violating these restrictions will result in an INVALID_OPERATION error.

Any restrictions imposed by specific compressed internal formats will be invariant, meaning that if the GL accepts and stores a texture image in compressed form, providing the same image to CompressedTexImage1D, CompressedTexImage2D, or CompressedTexImage3D will not result in an INVALID_OPERATION error if the following restrictions are satisfied:

  • data points to a compressed texture image returned by GetCompressedTexImage (section 6.1.4).

  • target, level, and internalformat match the target, level and format parameters provided to the GetCompressedTexImage call returning data.

  • width, height, depth, border, internalformat, and imageSize match the values of TEXTURE_WIDTH, TEXTURE_HEIGHT, TEXTURE_DEPTH, TEXTURE_BORDER, TEXTURE_INTERNAL_FORMAT, and TEXTURE_COMPRESSED_IMAGE_SIZE for image level level in effect at the time of the GetCompressedTexImage call returning data.

This guarantee applies not just to images returned by GetCompressedTexImage, but also to any other properly encoded compressed texture image of the same size and format.

If internalformat is one of the specific RGTC formats described in table 3.19, the compressed image data is stored using one of the RGTC compressed texture image encodings (see appendix C.1). The RGTC texture compression algorithm supports only two-dimensional images without borders. If internalformat is an RGTC


format, CompressedTexImage1D will generate an INVALID_ENUM error; CompressedTexImage2D will generate an INVALID_OPERATION error if border is non-zero; and CompressedTexImage3D will generate an INVALID_OPERATION error if border is non-zero or target is not TEXTURE_2D_ARRAY.

The commands

void CompressedTexSubImage1D( enum target, int level, int xoffset, sizei width, enum format, sizei imageSize, void *data );

void CompressedTexSubImage2D( enum target, int level, int xoffset, int yoffset, sizei width, sizei height, enum format, sizei imageSize, void *data );

void CompressedTexSubImage3D( enum target, int level, int xoffset, int yoffset, int zoffset, sizei width, sizei height, sizei depth, enum format, sizei imageSize, void *data );

respecify only a rectangular region of an existing texel array, with incoming data stored in a known compressed image format. The target, level, xoffset, yoffset, zoffset, width, height, and depth parameters have the same meaning as in TexSubImage1D, TexSubImage2D, and TexSubImage3D. data points to compressed image data stored in the compressed image format corresponding to format. Since the core GL provides no specific image formats, using any of the six generic compressed internal formats as format will result in an INVALID_ENUM error.

The image pointed to by data and the imageSize parameter are interpreted as though they were provided to CompressedTexImage1D, CompressedTexImage2D, and CompressedTexImage3D. These commands do not provide for image format conversion, so an INVALID_OPERATION error results if format does not match the internal format of the texture image being modified. If the imageSize parameter is not consistent with the format, dimensions, and contents of the compressed image (too little or too much data), an INVALID_VALUE error results.

As with CompressedTexImage calls, compressed internal formats may have additional restrictions on the use of the compressed image specification calls or parameters. Any such restrictions will be documented in the specification defining the compressed internal format; violating these restrictions will result in an INVALID_OPERATION error.

Any restrictions imposed by specific compressed internal formats will be invariant, meaning that if the GL accepts and stores a texture image in compressed form, providing the same image to CompressedTexSubImage1D, CompressedTexSubImage2D, or CompressedTexSubImage3D will not result in an INVALID_OPERATION error if the following restrictions are satisfied:


  • data points to a compressed texture image returned by GetCompressedTex-Image (section 6.1.4).

  • target, level, and format match the target, level and format parameters provided to the GetCompressedTexImage call returning data.

  • width, height, depth, format, and imageSize match the values of TEXTURE_WIDTH, TEXTURE_HEIGHT, TEXTURE_DEPTH, TEXTURE_INTERNAL_FORMAT, and TEXTURE_COMPRESSED_IMAGE_SIZE for image level level in effect at the time of the GetCompressedTexImage call returning data.

  • width, height, depth, and format match the values of TEXTURE_WIDTH, TEXTURE_HEIGHT, TEXTURE_DEPTH, and TEXTURE_INTERNAL_FORMAT currently in effect for image level level.

  • xoffset, yoffset, and zoffset are all -b, where b is the value of TEXTURE_BORDER currently in effect for image level level.

This guarantee applies not just to images returned by GetCompressedTexImage, but also to any other properly encoded compressed texture image of the same size.

Calling CompressedTexSubImage3D, CompressedTexSubImage2D, or CompressedTexSubImage1D will result in an INVALID_OPERATION error if xoffset, yoffset, or zoffset is not equal to -bs (border width), or if width, height, and depth do not match the values of TEXTURE_WIDTH, TEXTURE_HEIGHT, or TEXTURE_DEPTH, respectively. The contents of any texel outside the region modified by the call are undefined. These restrictions may be relaxed for specific compressed internal formats whose images are easily modified.

If internalformat is one of the specific RGTC formats described in table 3.19, the texture is stored using one of the RGTC compressed texture image encodings (see appendix C.1). If internalformat is an RGTC format, CompressedTexSubImage1D will generate an INVALID_ENUM error; CompressedTexSubImage2D will generate an INVALID_OPERATION error if border is non-zero; and CompressedTexSubImage3D will generate an INVALID_OPERATION error if border is non-zero or target is not TEXTURE_2D_ARRAY. Since RGTC images are easily edited along 4 × 4 texel boundaries, the limitations on subimage location and size are relaxed for CompressedTexSubImage2D and CompressedTexSubImage3D. These commands will result in an INVALID_OPERATION error if one of the following conditions occurs:

  • width is not a multiple of four or equal to TEXTURE_WIDTH.

  • height is not a multiple of four or equal to TEXTURE_HEIGHT.

  • xoffset or yoffset is not a multiple of four.

The contents of any 4 × 4 block of texels of an RGTC compressed texture image that does not intersect the area being modified are preserved during valid TexSubImage* and CopyTexSubImage* calls.

3.9.4 Texture Parameters

Various parameters control how the texel array is treated when specified or changed, and when applied to a fragment. Each parameter is set by calling

void TexParameter{if}( enum target, enum pname, T param );
void TexParameter{if}v( enum target, enum pname, T *params );
void TexParameterI{i ui}v( enum target, enum pname, T *params );

target is the target, either TEXTURE_1D, TEXTURE_2D, TEXTURE_3D, TEXTURE_1D_ARRAY, TEXTURE_2D_ARRAY, or TEXTURE_CUBE_MAP. pname is a symbolic constant indicating the parameter to be set; the possible constants and corresponding parameters are summarized in table 3.20. In the first form of the command, param is a value to which to set a single-valued parameter; in the remaining forms, params is an array of parameters whose type depends on the parameter being set.

If the value for TEXTURE_PRIORITY is specified as an integer, the conversion for signed integers from table 2.10 is applied to convert this value to floating-point, followed by clamping the value to lie in [0, 1].

If the values for TEXTURE_BORDER_COLOR are specified with TexParameterIiv or TexParameterIuiv, the values are unmodified and stored with an internal data type of integer. If specified with TexParameteriv, the conversion for signed integers from table 2.10 is applied to convert these values to floating-point. Otherwise the values are unmodified and stored as floating-point.

In the remainder of section 3.9, denote by lod_min, lod_max, level_base, and level_max the values of the texture parameters TEXTURE_MIN_LOD, TEXTURE_MAX_LOD, TEXTURE_BASE_LEVEL, and TEXTURE_MAX_LEVEL respectively.

Texture parameters for a cube map texture apply to the cube map as a whole; the six distinct two-dimensional texture images use the texture parameters of the cube map itself.


Name                  Type      Legal Values
TEXTURE_WRAP_S        enum      CLAMP, CLAMP_TO_EDGE, REPEAT, CLAMP_TO_BORDER, MIRRORED_REPEAT
TEXTURE_WRAP_T        enum      CLAMP, CLAMP_TO_EDGE, REPEAT, CLAMP_TO_BORDER, MIRRORED_REPEAT
TEXTURE_WRAP_R        enum      CLAMP, CLAMP_TO_EDGE, REPEAT, CLAMP_TO_BORDER, MIRRORED_REPEAT
TEXTURE_MIN_FILTER    enum      NEAREST, LINEAR, NEAREST_MIPMAP_NEAREST, NEAREST_MIPMAP_LINEAR, LINEAR_MIPMAP_NEAREST, LINEAR_MIPMAP_LINEAR
TEXTURE_MAG_FILTER    enum      NEAREST, LINEAR
TEXTURE_BORDER_COLOR  4 floats, integers, or unsigned integers      any 4 values
TEXTURE_PRIORITY      float     any value in [0, 1]
TEXTURE_MIN_LOD       float     any value
TEXTURE_MAX_LOD       float     any value
TEXTURE_BASE_LEVEL    integer   any non-negative integer
TEXTURE_MAX_LEVEL     integer   any non-negative integer
TEXTURE_LOD_BIAS      float     any value
DEPTH_TEXTURE_MODE    enum      RED, LUMINANCE, INTENSITY, ALPHA
TEXTURE_COMPARE_MODE  enum      NONE, COMPARE_REF_TO_TEXTURE
TEXTURE_COMPARE_FUNC  enum      LEQUAL, GEQUAL, LESS, GREATER, EQUAL, NOTEQUAL, ALWAYS, NEVER
GENERATE_MIPMAP       boolean   TRUE or FALSE

Table 3.20: Texture parameters and their values.


Major Axis Direction   Target                         sc    tc    ma
+rx                    TEXTURE_CUBE_MAP_POSITIVE_X    -rz   -ry   rx
-rx                    TEXTURE_CUBE_MAP_NEGATIVE_X    +rz   -ry   rx
+ry                    TEXTURE_CUBE_MAP_POSITIVE_Y    +rx   +rz   ry
-ry                    TEXTURE_CUBE_MAP_NEGATIVE_Y    +rx   -rz   ry
+rz                    TEXTURE_CUBE_MAP_POSITIVE_Z    +rx   -ry   rz
-rz                    TEXTURE_CUBE_MAP_NEGATIVE_Z    -rx   -ry   rz

Table 3.21: Selection of cube map images based on major axis direction of texture coordinates.

If the value of texture parameter GENERATE_MIPMAP is TRUE, specifying or changing texel arrays may have side effects, which are discussed in the Automatic Mipmap Generation discussion of section 3.9.7.

3.9.5 Depth Component Textures

Depth textures and the depth components of depth/stencil textures can be treated as RED, LUMINANCE, INTENSITY, or ALPHA textures during texture filtering and application (see section 3.9.14). The initial state for depth and depth/stencil textures treats them as LUMINANCE textures, except in a forward-compatible context, where the initial state instead treats them as RED textures.

3.9.6 Cube Map Texture Selection

When cube map texturing is enabled, the (s, t, r) texture coordinates are treated as a direction vector (rx, ry, rz) emanating from the center of a cube (the q coordinate can be ignored, since it merely scales the vector without affecting the direction). At texture application time, the interpolated per-fragment direction vector selects one of the cube map face's two-dimensional images based on the largest magnitude coordinate direction (the major axis direction). If two or more coordinates have the identical magnitude, the implementation may define the rule to disambiguate this situation. The rule must be deterministic and depend only on (rx, ry, rz). The target column in table 3.21 explains how the major axis direction maps to the two-dimensional image of a particular cube map target. Using the sc, tc, and ma determined by the major axis direction as specified in table 3.21, an updated (s, t) is calculated as follows:


\[ s = \frac{1}{2}\left(\frac{s_c}{|m_a|} + 1\right) \]

\[ t = \frac{1}{2}\left(\frac{t_c}{|m_a|} + 1\right) \]

This new (s, t) is used to find a texture value in the determined face's two-dimensional texture image using the rules given in sections 3.9.7 through 3.9.8.

3.9.7 Texture Minification

Applying a texture to a primitive implies a mapping from texture image space to framebuffer image space. In general, this mapping involves a reconstruction of the sampled texture image, followed by a homogeneous warping implied by the mapping to framebuffer space, then a filtering, followed finally by a resampling of the filtered, warped, reconstructed image before applying it to a fragment. In the GL this mapping is approximated by one of two simple filtering schemes. One of these schemes is selected based on whether the mapping from texture space to framebuffer space is deemed to magnify or minify the texture image.

Scale Factor and Level of Detail

The choice is governed by a scale factor ρ(x, y) and the level-of-detail parameter λ(x, y), defined as

\[ \lambda_{base}(x, y) = \log_2[\rho(x, y)] \tag{3.16} \]

\[ \lambda'(x, y) = \lambda_{base}(x, y) + \mathrm{clamp}(bias_{texobj} + bias_{texunit} + bias_{shader}) \tag{3.17} \]

\[ \lambda = \begin{cases} lod_{max}, & \lambda' > lod_{max} \\ \lambda', & lod_{min} \le \lambda' \le lod_{max} \\ lod_{min}, & \lambda' < lod_{min} \\ \text{undefined}, & lod_{min} > lod_{max} \end{cases} \tag{3.18} \]

bias_texobj is the value of TEXTURE_LOD_BIAS for the bound texture object (as described in section 3.9.4). bias_texunit is the value of TEXTURE_LOD_BIAS for the current texture unit (as described in section 3.9.13). bias_shader is the value of the optional bias parameter in the texture lookup functions available to fragment shaders. If the texture access is performed in a fragment shader without a provided bias, or outside a fragment shader, then bias_shader is zero. The sum of these values is clamped to the range [-bias_max, bias_max] where bias_max is the value of the implementation-defined constant MAX_TEXTURE_LOD_BIAS.

If λ'(x, y) is less than or equal to the constant c (see section 3.9.8) the texture is said to be magnified; if it is greater, the texture is minified. Sampling of minified textures is described in the remainder of this section, while sampling of magnified textures is described in section 3.9.8.

The initial values of lod_min and lod_max are chosen so as to never clamp the normal range of λ. They may be respecified for a specific texture by calling TexParameter[if] with pname set to TEXTURE_MIN_LOD or TEXTURE_MAX_LOD respectively.

Let s(x, y) be the function that associates an s texture coordinate with each set of window coordinates (x, y) that lie within a primitive; define t(x, y) and r(x, y) analogously. Let

\[ u(x, y) = w_t \times s(x, y) + \delta_u \]
\[ v(x, y) = h_t \times t(x, y) + \delta_v \tag{3.19} \]
\[ w(x, y) = d_t \times r(x, y) + \delta_w \]

where w_t, h_t, and d_t are as defined by equation 3.15 with w_s, h_s, and d_s equal to the width, height, and depth of the image array whose level is level_base. For a one-dimensional or one-dimensional array texture, define v(x, y) ≡ 0 and w(x, y) ≡ 0; for a two-dimensional, two-dimensional array, or cube map texture, define w(x, y) ≡ 0.

(δ_u, δ_v, δ_w) are the texel offsets specified in the OpenGL Shading Language texture lookup functions that support offsets. If the texture function used does not support offsets, or for fixed-function texture accesses, all three shader offsets are taken to be zero. If any of the offset values are outside the range of the implementation-defined values MIN_PROGRAM_TEXEL_OFFSET and MAX_PROGRAM_TEXEL_OFFSET, results of the texture lookup are undefined.

For a polygon, ρ is given at a fragment with window coordinates (x, y) by

\[ \rho = \max\left\{ \sqrt{\left(\frac{\partial u}{\partial x}\right)^2 + \left(\frac{\partial v}{\partial x}\right)^2 + \left(\frac{\partial w}{\partial x}\right)^2}, \; \sqrt{\left(\frac{\partial u}{\partial y}\right)^2 + \left(\frac{\partial v}{\partial y}\right)^2 + \left(\frac{\partial w}{\partial y}\right)^2} \right\} \tag{3.20} \]

where ∂u/∂x indicates the derivative of u with respect to window x, and similarly for the other derivatives.


For a line, the formula is

\[ \rho = \frac{\sqrt{\left(\frac{\partial u}{\partial x}\Delta x + \frac{\partial u}{\partial y}\Delta y\right)^2 + \left(\frac{\partial v}{\partial x}\Delta x + \frac{\partial v}{\partial y}\Delta y\right)^2 + \left(\frac{\partial w}{\partial x}\Delta x + \frac{\partial w}{\partial y}\Delta y\right)^2}}{l} \tag{3.21} \]

where Δx = x_2 - x_1 and Δy = y_2 - y_1 with (x_1, y_1) and (x_2, y_2) being the segment's window coordinate endpoints, and l = \sqrt{\Delta x^2 + \Delta y^2}. For a point, pixel rectangle, or bitmap, ρ ≡ 1.

While it is generally agreed that equations 3.20 and 3.21 give the best results when texturing, they are often impractical to implement. Therefore, an implementation may approximate the ideal ρ with a function f(x, y) subject to these conditions:

  1. f(x, y) is continuous and monotonically increasing in each of |∂u/∂x|, |∂u/∂y|, |∂v/∂x|, |∂v/∂y|, |∂w/∂x|, and |∂w/∂y|.

  2. Let

\[ m_u = \max\left\{ \left|\frac{\partial u}{\partial x}\right|, \left|\frac{\partial u}{\partial y}\right| \right\} \]
\[ m_v = \max\left\{ \left|\frac{\partial v}{\partial x}\right|, \left|\frac{\partial v}{\partial y}\right| \right\} \]
\[ m_w = \max\left\{ \left|\frac{\partial w}{\partial x}\right|, \left|\frac{\partial w}{\partial y}\right| \right\} \]

     Then max{m_u, m_v, m_w} ≤ f(x, y) ≤ m_u + m_v + m_w.

Coordinate Wrapping and Texel Selection

After generating u(x,y), v(x,y), and w(x,y), they may be clamped and wrapped before sampling the texture, depending on the corresponding texture wrap modes. Let

\[ u'(x, y) = \begin{cases} \mathrm{clamp}(u(x, y), 0, w_t), & \text{TEXTURE\_WRAP\_S is CLAMP} \\ u(x, y), & \text{otherwise} \end{cases} \]

\[ v'(x, y) = \begin{cases} \mathrm{clamp}(v(x, y), 0, h_t), & \text{TEXTURE\_WRAP\_T is CLAMP} \\ v(x, y), & \text{otherwise} \end{cases} \]

\[ w'(x, y) = \begin{cases} \mathrm{clamp}(w(x, y), 0, d_t), & \text{TEXTURE\_WRAP\_R is CLAMP} \\ w(x, y), & \text{otherwise} \end{cases} \]

where clamp(a, b, c) returns b if a < b, c if a > c, and a otherwise.

The value assigned to TEXTURE_MIN_FILTER is used to determine how the texture value for a fragment is selected.

When the value of TEXTURE_MIN_FILTER is NEAREST, the texel in the image array of level level_base that is nearest (in Manhattan distance) to (u', v', w') is obtained. Let (i, j, k) be integers such that

\[ i = \mathrm{wrap}(\lfloor u'(x, y) \rfloor) \quad j = \mathrm{wrap}(\lfloor v'(x, y) \rfloor) \quad k = \mathrm{wrap}(\lfloor w'(x, y) \rfloor) \]

and the value returned by wrap() is defined in table 3.22. For a three-dimensional texture, the texel at location (i, j, k) becomes the texture value. For two-dimensional, two-dimensional array, or cube map textures, k is irrelevant, and the texel at location (i, j) becomes the texture value. For one-dimensional texture or one-dimensional array textures, j and k are irrelevant, and the texel at location i becomes the texture value.

For one- and two-dimensional array textures, the texel is obtained from image layer l, where

\[ l = \begin{cases} \mathrm{clamp}(\lfloor t + 0.5 \rfloor, 0, h_t - 1), & \text{for one-dimensional array textures} \\ \mathrm{clamp}(\lfloor r + 0.5 \rfloor, 0, d_t - 1), & \text{for two-dimensional array textures} \end{cases} \]

Wrap mode          Result of wrap(coord)
CLAMP              clamp(coord, 0, size - 1) for NEAREST filtering; clamp(coord, -1, size) for LINEAR filtering
CLAMP_TO_EDGE      clamp(coord, 0, size - 1)
CLAMP_TO_BORDER    clamp(coord, -1, size)
REPEAT             fmod(coord, size)
MIRRORED_REPEAT    (size - 1) - mirror(fmod(coord, 2 x size) - size)

Table 3.22: Texel location wrap mode application. fmod(a, b) returns a - b x ⌊a/b⌋. mirror(a) returns a if a ≥ 0, and -(1 + a) otherwise. The values of mode and size are TEXTURE_WRAP_S and w_t, TEXTURE_WRAP_T and h_t, and TEXTURE_WRAP_R and d_t when wrapping i, j, or k coordinates, respectively.


If the selected (i, j, k), (i, j), or i location refers to a border texel that satisfies any of the conditions

\[ i < -b_s \quad i \ge w_t + b_s \quad j < -b_s \quad j \ge h_t + b_s \quad k < -b_s \quad k \ge d_t + b_s \]

then the border values defined by TEXTURE_BORDER_COLOR are used in place of the non-existent texel. If the texture contains color components, the values of TEXTURE_BORDER_COLOR are interpreted as an RGBA color to match the texture's internal format in a manner consistent with table 3.15. The internal data type of the border values must be consistent with the type returned by the texture as described in section 3.9, or the result is undefined. The border values for texture components stored as fixed-point values are clamped to [0, 1] before they are used. If the texture contains depth components, the first component of TEXTURE_BORDER_COLOR is interpreted as a depth value.

When the value of TEXTURE_MIN_FILTER is LINEAR, a 2 x 2 x 2 cube of texels in the image array of level level_base is selected. Let

\[ i_0 = \mathrm{wrap}(\lfloor u' - 1/2 \rfloor) \quad j_0 = \mathrm{wrap}(\lfloor v' - 1/2 \rfloor) \quad k_0 = \mathrm{wrap}(\lfloor w' - 1/2 \rfloor) \]
\[ i_1 = \mathrm{wrap}(\lfloor u' - 1/2 \rfloor + 1) \quad j_1 = \mathrm{wrap}(\lfloor v' - 1/2 \rfloor + 1) \quad k_1 = \mathrm{wrap}(\lfloor w' - 1/2 \rfloor + 1) \]
\[ \alpha = \mathrm{frac}(u' - 1/2) \quad \beta = \mathrm{frac}(v' - 1/2) \quad \gamma = \mathrm{frac}(w' - 1/2) \]

where frac(x) denotes the fractional part of x. For a three-dimensional texture, the texture value τ is found as

\[ \begin{aligned} \tau = {} & (1-\alpha)(1-\beta)(1-\gamma)\,\tau_{i_0 j_0 k_0} + \alpha(1-\beta)(1-\gamma)\,\tau_{i_1 j_0 k_0} \\ & + (1-\alpha)\beta(1-\gamma)\,\tau_{i_0 j_1 k_0} + \alpha\beta(1-\gamma)\,\tau_{i_1 j_1 k_0} \\ & + (1-\alpha)(1-\beta)\gamma\,\tau_{i_0 j_0 k_1} + \alpha(1-\beta)\gamma\,\tau_{i_1 j_0 k_1} \\ & + (1-\alpha)\beta\gamma\,\tau_{i_0 j_1 k_1} + \alpha\beta\gamma\,\tau_{i_1 j_1 k_1} \end{aligned} \tag{3.22} \]


where τ_{ijk} is the texel at location (i, j, k) in the three-dimensional texture image. For two-dimensional, two-dimensional array, or cube map textures,

\[ \tau = (1-\alpha)(1-\beta)\,\tau_{i_0 j_0} + \alpha(1-\beta)\,\tau_{i_1 j_0} + (1-\alpha)\beta\,\tau_{i_0 j_1} + \alpha\beta\,\tau_{i_1 j_1} \]

where τ_{ij} is the texel at location (i, j) in the two-dimensional texture image. For two-dimensional array textures, all texels are obtained from layer l, where

\[ l = \mathrm{clamp}(\lfloor r + 0.5 \rfloor, 0, d_t - 1). \]

And for a one-dimensional or one-dimensional array texture,

\[ \tau = (1-\alpha)\,\tau_{i_0} + \alpha\,\tau_{i_1} \]

where τ_i is the texel at location i in the one-dimensional texture. For one-dimensional array textures, both texels are obtained from layer l, where

\[ l = \mathrm{clamp}(\lfloor t + 0.5 \rfloor, 0, h_t - 1). \]

For any texel in the equation above that refers to a border texel outside the defined range of the image, the texel value is taken from the texture border color as with NEAREST filtering.

If all of the following conditions are satisfied, then the value of the selected τ_{ijk}, τ_{ij}, or τ_i in the above equations is undefined instead of referring to the value of the texel at location (i, j, k), (i, j), or (i), respectively. See chapter 4 for discussion of framebuffer objects and their attachments.

  • The current DRAW_FRAMEBUFFER_BINDING names a framebuffer object F.

  • The texture is attached to one of the attachment points, A, of framebuffer object F.

  • The value of TEXTURE_MIN_FILTER is NEAREST or LINEAR, and the value of FRAMEBUFFER_ATTACHMENT_TEXTURE_LEVEL for attachment point A is equal to the value of TEXTURE_BASE_LEVEL,

    -or-

    The value of TEXTURE_MIN_FILTER is NEAREST_MIPMAP_NEAREST, NEAREST_MIPMAP_LINEAR, LINEAR_MIPMAP_NEAREST, or LINEAR_MIPMAP_LINEAR, and the value of FRAMEBUFFER_ATTACHMENT_TEXTURE_LEVEL for attachment point A is within the inclusive range from TEXTURE_BASE_LEVEL to q.


Mipmapping

TEXTURE_MIN_FILTER values NEAREST_MIPMAP_NEAREST, NEAREST_MIPMAP_LINEAR, LINEAR_MIPMAP_NEAREST, and LINEAR_MIPMAP_LINEAR each require the use of a mipmap. A mipmap is an ordered set of arrays representing the same image; each array has a resolution lower than the previous one. If the image array of level level_base, excluding its border, has dimensions w_t x h_t x d_t, then there are ⌊log_2(maxsize)⌋ + 1 levels in the mipmap, where

\[ maxsize = \begin{cases} w_t, & \text{for 1D and 1D array textures} \\ \max(w_t, h_t), & \text{for 2D, 2D array, and cube map textures} \\ \max(w_t, h_t, d_t), & \text{for 3D textures} \end{cases} \]

Numbering the levels such that level level_base is the 0th level, the ith array has dimensions

\[ \max\left(1, \left\lfloor \frac{w_t}{w_d} \right\rfloor\right) \times \max\left(1, \left\lfloor \frac{h_t}{h_d} \right\rfloor\right) \times \max\left(1, \left\lfloor \frac{d_t}{d_d} \right\rfloor\right) \]

where

\[ w_d = 2^i \]
\[ h_d = \begin{cases} 1, & \text{for 1D and 1D array textures} \\ 2^i, & \text{otherwise} \end{cases} \]
\[ d_d = \begin{cases} 2^i, & \text{for 3D textures} \\ 1, & \text{otherwise} \end{cases} \]

until the last array is reached with dimension 1 x 1 x 1.

Each array in a mipmap is defined using TexImage3D, TexImage2D, CopyTexImage2D, TexImage1D, or CopyTexImage1D; the array being set is indicated with the level-of-detail argument level. Level-of-detail numbers proceed from level_base for the original texel array through p = ⌊log_2(maxsize)⌋ + level_base with each unit increase indicating an array of half the dimensions of the previous one (rounded down to the next integer if fractional) as already described. All arrays from level_base through q = min{p, level_max} must be defined, as discussed in section 3.9.10.

The values of level_base and level_max may be respecified for a specific texture by calling TexParameter[if] with pname set to TEXTURE_BASE_LEVEL or TEXTURE_MAX_LEVEL respectively.


The error INVALID_VALUE is generated if either value is negative.

The mipmap is used in conjunction with the level of detail to approximate the application of an appropriately filtered texture to a fragment. Let c be the value of λ at which the transition from minification to magnification occurs (since this discussion pertains to minification, we are concerned only with values of λ where λ > c).

For mipmap filters NEAREST_MIPMAP_NEAREST and LINEAR_MIPMAP_NEAREST, the dth mipmap array is selected, where

\[ d = \begin{cases} level_{base}, & \lambda \le \frac{1}{2} \\ \left\lceil level_{base} + \lambda + \frac{1}{2} \right\rceil - 1, & \lambda > \frac{1}{2},\; level_{base} + \lambda \le q + \frac{1}{2} \\ q, & \lambda > \frac{1}{2},\; level_{base} + \lambda > q + \frac{1}{2} \end{cases} \tag{3.23} \]

The rules for NEAREST or LINEAR filtering are then applied to the selected array. Specifically, the coordinate (u, v, w) is computed as in equation 3.19, with w_s, h_s, and d_s equal to the width, height, and depth of the image array whose level is d.

For mipmap filters NEAREST_MIPMAP_LINEAR and LINEAR_MIPMAP_LINEAR, the level d_1 and d_2 mipmap arrays are selected, where

\[ d_1 = \begin{cases} q, & level_{base} + \lambda \ge q \\ \lfloor level_{base} + \lambda \rfloor, & \text{otherwise} \end{cases} \tag{3.24} \]

\[ d_2 = \begin{cases} q, & level_{base} + \lambda \ge q \\ d_1 + 1, & \text{otherwise} \end{cases} \tag{3.25} \]

The rules for NEAREST or LINEAR filtering are then applied to each of the selected arrays, yielding two corresponding texture values τ_1 and τ_2. Specifically, for level d_1, the coordinate (u, v, w) is computed as in equation 3.19, with w_s, h_s, and d_s equal to the width, height, and depth of the image array whose level is d_1. For level d_2 the coordinate (u, v, w) is computed as in equation 3.19, with w_s, h_s, and d_s equal to the width, height, and depth of the image array whose level is d_2.

The final texture value is then found as

\[ \tau = [1 - \mathrm{frac}(\lambda)]\,\tau_1 + \mathrm{frac}(\lambda)\,\tau_2. \]


Automatic Mipmap Generation

If the value of texture parameter GENERATE_MIPMAP is TRUE, and a change is made to the interior or border texels of the level_base array of a mipmap by one of the texture image specification operations defined in sections 3.9.1 through 3.9.3, then a complete set of mipmap arrays (as defined in section 3.9.10) will be computed.³ Array levels level_base + 1 through p are replaced with arrays derived from the modified level_base array, regardless of their previous contents. All other mipmap arrays, including the level_base array, are left unchanged by this computation.

The internal formats and border widths of the derived mipmap arrays all match those of the level_base array, and the dimensions of the derived arrays follow the requirements described in section 3.9.10.

The contents of the derived arrays are computed by repeated, filtered reduction of the level_base array. For one- and two-dimensional array textures, each layer is filtered independently. No particular filter algorithm is required, though a box filter is recommended as the default filter. In some implementations, filter quality may be affected by hints (section 5.6).

Automatic mipmap generation is available only for non-proxy texture image targets.

Manual Mipmap Generation

Mipmaps can be generated manually with the command

void GenerateMipmap( enum target );

where target is one of TEXTURE_1D, TEXTURE_2D, TEXTURE_3D, TEXTURE_1D_ARRAY, TEXTURE_2D_ARRAY, or TEXTURE_CUBE_MAP. Mipmap generation affects the texture image attached to target. For cube map textures, an INVALID_OPERATION error is generated if the texture bound to target is not cube complete, as defined in section 3.9.10.

Mipmap generation replaces texel array levels level_base + 1 through q with arrays derived from the level_base array, as described above for Automatic Mipmap Generation. All other mipmap arrays, including the level_base array, are left unchanged by this computation. For arrays in the range level_base + 1 through q, inclusive, automatic and manual mipmap generation generate the same derived arrays, given identical level_base arrays.

³ Automatic mipmap generation is not performed for changes resulting from rendering operations targeting a texel array bound as a color buffer of a framebuffer object.


3.9.8 Texture Magnification

When λ indicates magnification, the value assigned to TEXTURE_MAG_FILTER determines how the texture value is obtained. There are two possible values for TEXTURE_MAG_FILTER: NEAREST and LINEAR. NEAREST behaves exactly as NEAREST for TEXTURE_MIN_FILTER and LINEAR behaves exactly as LINEAR for TEXTURE_MIN_FILTER, as described in section 3.9.7, including the texture coordinate wrap modes specified in table 3.22. The level-of-detail level_base texel array is always used for magnification.

Finally, there is the choice of c, the minification vs. magnification switch-over point. If the magnification filter is given by LINEAR and the minification filter is given by NEAREST_MIPMAP_NEAREST or NEAREST_MIPMAP_LINEAR, then c = 0.5. This is done to ensure that a minified texture does not appear "sharper" than a magnified texture. Otherwise c = 0.

3.9.9 Combined Depth/Stencil Textures

If the texture image has a base internal format of DEPTH_STENCIL, then the stencil index texture component is ignored. The texture value τ does not include a stencil index component, but includes only the depth component.

3.9.10 Texture Completeness

A texture is said to be complete if all the image arrays and texture parameters required to utilize the texture for texture application are consistently defined. The definition of completeness varies depending on the texture dimensionality.

For one-, two-, or three-dimensional textures and one- or two-dimensional array textures, a texture is complete if the following conditions all hold true:

  • The set of mipmap arrays level_base through q (where q is defined in the Mipmapping discussion of section 3.9.7) were each specified with the same internal format.

  • The border widths of each array are the same.

  • The dimensions of the arrays follow the sequence described in the Mipmapping discussion of section 3.9.7.

  • level_base ≤ level_max

  • Each dimension of the level_base array is positive.

  • If the internal format of the arrays is integer (see tables 3.16-3.17), TEXTURE_MAG_FILTER must be NEAREST and TEXTURE_MIN_FILTER must be NEAREST or NEAREST_MIPMAP_NEAREST.

Array levels k where k < level_base or k > q are insignificant to the definition of completeness.

For cube map textures, a texture is cube complete if the following conditions all hold true:

  • The level_base arrays of each of the six texture images making up the cube map have identical, positive, and square dimensions.

  • The level_base arrays were each specified with the same internal format.

  • The level_base arrays each have the same border width.

Finally, a cube map texture is mipmap cube complete if, in addition to being cube complete, each of the six texture images considered individually is complete.

Effects of Completeness on Texture Application

If one-, two-, or three-dimensional texturing (but not cube map texturing) is enabled for a texture unit at the time a primitive is rasterized, if TEXTURE_MIN_FILTER is one that requires a mipmap, and if the texture image bound to the enabled texture target is not complete, then it is as if texture mapping were disabled for that texture unit.

If cube map texturing is enabled for a texture unit at the time a primitive is rasterized, and if the bound cube map texture is not cube complete, then it is as if texture mapping were disabled for that texture unit. Additionally, if TEXTURE_MIN_FILTER is one that requires a mipmap, and if the texture is not mipmap cube complete, then it is as if texture mapping were disabled for that texture unit.

Effects of Completeness on Texture Image Specification

An implementation may allow a texture image array of level 1 or greater to be created only if a mipmap complete set of image arrays consistent with the requested array can be supported. A mipmap complete set of arrays is equivalent to a complete set of arrays where level_base = 0 and level_max = 1000, and where, excluding borders, the dimensions of the image array being created are understood to be half the corresponding dimensions of the next lower numbered array (rounded down to the next integer if fractional).


3.9.11 Texture State and Proxy State

The state necessary for texture can be divided into two categories. First, there are the nine sets of mipmap arrays (one each for the one-, two-, and three-dimensional texture targets and six for the cube map texture targets) and their number. Each array has associated with it a width, height (two- and three-dimensional and cube map only), and depth (three-dimensional only), a border width, an integer describing the internal format of the image, eight integer values describing the resolutions of each of the red, green, blue, alpha, luminance, intensity, depth, and stencil components of the image, eight integer values describing the type (unsigned normalized, integer, floating-point, etc.) of each of the components, a boolean describing whether the image is compressed or not, and an integer size of a compressed image. Each initial texel array is null (zero width, height, and depth, zero border width, internal format 1, with the compressed flag set to FALSE, a zero compressed size, and zero-sized components). Next, there are the four sets of texture properties, corresponding to the one-, two-, three-dimensional, and cube map texture targets. Each set consists of the selected minification and magnification filters, the wrap modes for s, t (two- and three-dimensional and cube map only), and r (three-dimensional only), the TEXTURE_BORDER_COLOR, two floating-point numbers describing the minimum and maximum level of detail, two integers describing the base and maximum mipmap array, a boolean flag indicating whether the texture is resident, a boolean indicating whether automatic mipmap generation should be performed, three integers describing the depth texture mode, compare mode, and compare function, and the priority associated with each set of properties. The value of the resident flag is determined by the GL and may change as a result of other GL operations. The flag may only be queried, not set, by applications (see section 3.9.12).
In the initial state, the value assigned to TEXTURE_MIN_FILTER is NEAREST_MIPMAP_LINEAR, and the value for TEXTURE_MAG_FILTER is LINEAR. The s, t, and r wrap modes are all set to REPEAT. The values of TEXTURE_MIN_LOD and TEXTURE_MAX_LOD are -1000 and 1000 respectively. The values of TEXTURE_BASE_LEVEL and TEXTURE_MAX_LEVEL are 0 and 1000 respectively. TEXTURE_PRIORITY is 1.0, and TEXTURE_BORDER_COLOR is (0, 0, 0, 0). The value of GENERATE_MIPMAP is FALSE. The values of DEPTH_TEXTURE_MODE, TEXTURE_COMPARE_MODE, and TEXTURE_COMPARE_FUNC are LUMINANCE, NONE, and LEQUAL respectively. The initial value of TEXTURE_RESIDENT is determined by the GL.

In addition to image arrays for one-, two-, and three-dimensional textures, one- and two-dimensional array textures, and the six image arrays for the cube map texture, partially instantiated image arrays are maintained for one-, two-, and three-dimensional textures and one- and two-dimensional array textures. Additionally,


a single proxy image array is maintained for the cube map texture. Each proxy image array includes width, height, depth, border width, and internal format state values, as well as state for the red, green, blue, alpha, luminance, intensity, depth, and stencil component resolutions and types. Proxy arrays do not include image data nor texture parameters. When TexImage3D is executed with target specified as PROXY_TEXTURE_3D, the three-dimensional proxy state values of the specified level-of-detail are recomputed and updated. If the image array would not be supported by TexImage3D called with target set to TEXTURE_3D, no error is generated, but the proxy width, height, depth, border width, and component resolutions are set to zero, and the component types are set to NONE. If the image array would be supported by such a call to TexImage3D, the proxy state values are set exactly as though the actual image array were being specified. No pixel data are transferred or processed in either case.

Proxy arrays for one- and two-dimensional textures and one- and two-dimensional array textures are operated on in the same way when TexImage1D is executed with target specified as PROXY_TEXTURE_1D, TexImage2D is executed with target specified as PROXY_TEXTURE_2D or PROXY_TEXTURE_1D_ARRAY, or TexImage3D is executed with target specified as PROXY_TEXTURE_2D_ARRAY.

The cube map proxy arrays are operated on in the same manner when TexImage2D is executed with the target field specified as PROXY_TEXTURE_CUBE_MAP, with the addition that determining that a given cube map texture is supported with PROXY_TEXTURE_CUBE_MAP indicates that all six of the cube map 2D images are supported. Likewise, if the specified PROXY_TEXTURE_CUBE_MAP is not supported, none of the six cube map 2D images are supported.

There is no image associated with any of the proxy textures. Therefore PROXY_TEXTURE_1D, PROXY_TEXTURE_2D, PROXY_TEXTURE_3D, and PROXY_TEXTURE_CUBE_MAP cannot be used as textures, and their images must never be queried using GetTexImage. The error INVALID_ENUM is generated if this is attempted. Likewise, there is no non-level-related state associated with a proxy texture, and GetTexParameteriv or GetTexParameterfv may not be called with a proxy texture target. The error INVALID_ENUM is generated if this is attempted.

3.9.12 Texture Objects

In addition to the default textures TEXTURE_1D, TEXTURE_2D, TEXTURE_3D, TEXTURE_1D_ARRAY, TEXTURE_2D_ARRAY, and TEXTURE_CUBE_MAP, named one-, two-, and three-dimensional, one- and two-dimensional array, and cube map texture objects can be created and operated upon. The name space for texture objects is the unsigned integers, with zero reserved by the GL.


A texture object is created by binding an unused name to TEXTURE_1D, TEXTURE_2D, TEXTURE_3D, TEXTURE_1D_ARRAY, TEXTURE_2D_ARRAY, or TEXTURE_CUBE_MAP. The binding is effected by calling

void BindTexture( enum target, uint texture );

with target set to the desired texture target and texture set to the unused name. The resulting texture object is a new state vector, comprising all the state values listed in section 3.9.11, set to the same initial values. If the new texture object is bound to TEXTURE_1D, TEXTURE_2D, TEXTURE_3D, TEXTURE_1D_ARRAY, TEXTURE_2D_ARRAY, or TEXTURE_CUBE_MAP, it is and remains a one-, two-, or three-dimensional, one- or two-dimensional array, or cube map texture respectively until it is deleted.

BindTexture may also be used to bind an existing texture object to either TEXTURE_1D, TEXTURE_2D, TEXTURE_3D, TEXTURE_1D_ARRAY, TEXTURE_2D_ARRAY, or TEXTURE_CUBE_MAP. The error INVALID_OPERATION is generated if an attempt is made to bind a texture object of different dimensionality than the specified target. If the bind is successful no change is made to the state of the bound texture object, and any previous binding to target is broken.

While a texture object is bound, GL operations on the target to which it is bound affect the bound object, and queries of the target to which it is bound return state from the bound object. If texture mapping of the dimensionality of the target to which a texture object is bound is enabled, the state of the bound texture object directs the texturing operation.

In the initial state, TEXTURE_1D, TEXTURE_2D, TEXTURE_3D, TEXTURE_1D_ARRAY, TEXTURE_2D_ARRAY, and TEXTURE_CUBE_MAP have one-, two-, and three-dimensional, one- and two-dimensional array, and cube map texture state vectors respectively associated with them. In order that access to these initial textures not be lost, they are treated as texture objects all of whose names are 0. The initial one-, two-, and three-dimensional, one- and two-dimensional array, and cube map texture is therefore operated upon, queried, and applied as TEXTURE_1D, TEXTURE_2D, TEXTURE_3D, TEXTURE_1D_ARRAY, TEXTURE_2D_ARRAY, or TEXTURE_CUBE_MAP respectively while 0 is bound to the corresponding targets.

Texture objects are deleted by calling

void DeleteTextures( sizei n, uint *textures );

textures contains n names of texture objects to be deleted. After a texture object is deleted, it has no contents or dimensionality, and its name is again unused. If


a texture that is currently bound to one of the targets TEXTURE_1D, TEXTURE_2D, TEXTURE_3D, TEXTURE_1D_ARRAY, TEXTURE_2D_ARRAY, or TEXTURE_CUBE_MAP is deleted, it is as though BindTexture had been executed with the same target and texture zero. Additionally, special care must be taken when deleting a texture if any of the images of the texture are attached to a framebuffer object. See section 4.4.2 for details.

Unused names in textures are silently ignored, as is the value zero.

The command

void GenTextures( sizei n, uint *textures );

returns n previously unused texture object names in textures. These names are marked as used, for the purposes of GenTextures only, but they acquire texture state and a dimensionality only when they are first bound, just as if they were unused.

An implementation may choose to establish a working set of texture objects on which binding operations are performed with higher performance. A texture object that is currently part of the working set is said to be resident. The command

boolean AreTexturesResident( sizei n, uint *textures, boolean *residences );

returns TRUE if all of the n texture objects named in textures are resident, or if the implementation does not distinguish a working set. If at least one of the texture objects named in textures is not resident, then FALSE is returned, and the residence of each texture object is returned in residences. Otherwise the contents of residences are not changed. If any of the names in textures are unused or are zero, FALSE is returned, the error INVALID_VALUE is generated, and the contents of residences are indeterminate. The residence status of a single bound texture object can also be queried by calling GetTexParameteriv or GetTexParameterfv with target set to the target to which the texture object is bound, and pname set to TEXTURE_RESIDENT.

AreTexturesResident indicates only whether a texture object is currently resident, not whether it could not be made resident. An implementation may choose to make a texture object resident only on first use, for example. The client may guide the GL implementation in determining which texture objects should be resident by specifying a priority for each texture object. The command

void PrioritizeTextures( sizei n, uint *textures, clampf *priorities );

sets the priorities of the n texture objects named in textures to the values in priorities. Each priority value is clamped to the range [0,1] before it is assigned. Zero indicates the lowest priority, with the least likelihood of being resident. One indicates the highest priority, with the greatest likelihood of being resident. The priority of a single bound texture object may also be changed by calling TexParameteri, TexParameterf, TexParameteriv, or TexParameterfv with target set to the target to which the texture object is bound, pname set to TEXTURE_PRIORITY, and param or params specifying the new priority value (which is clamped to the range [0,1] before being assigned). PrioritizeTextures silently ignores attempts to prioritize unused texture object names or zero (default textures).

The texture object name space, including the initial one-, two-, and three-dimensional, one- and two-dimensional array, and cube map texture objects, is shared among all texture units. A texture object may be bound to more than one texture unit simultaneously. After a texture object is bound, any GL operations on that target object affect any other texture units to which the same texture object is bound.

Texture binding is affected by the setting of the state ACTIVE_TEXTURE.

If a texture object is deleted, it is as if all texture units which are bound to that texture object are rebound to texture object zero.

3.9.13 Texture Environments and Texture Functions

The command

void TexEnv{if}( enum target, enum pname, T param );

void TexEnv{if}v( enum target, enum pname, T params );

sets parameters of the texture environment that specifies how texture values are interpreted when texturing a fragment, or sets per-texture-unit filtering parameters.

target must be one of POINT_SPRITE, TEXTURE_ENV, or TEXTURE_FILTER_CONTROL. pname is a symbolic constant indicating the parameter to be set. In the first form of the command, param is a value to which to set a single-valued parameter; in the second form, params is a pointer to an array of parameters: either a single symbolic constant or a value or group of values to which the parameter should be set.

When target is POINT_SPRITE, point sprite rasterization behavior is affected as described in section 3.4.

When target is TEXTURE_FILTER_CONTROL, pname must be TEXTURE_LOD_BIAS. In this case the parameter is a single signed floating point value, bias_texunit, that biases the level of detail parameter λ as described in section 3.9.7.


When target is TEXTURE_ENV, the possible environment parameters are TEXTURE_ENV_MODE, TEXTURE_ENV_COLOR, COMBINE_RGB, COMBINE_ALPHA, RGB_SCALE, ALPHA_SCALE, SRCn_RGB, SRCn_ALPHA, OPERANDn_RGB, and OPERANDn_ALPHA, where n = 0, 1, or 2. TEXTURE_ENV_MODE may be set to one of REPLACE, MODULATE, DECAL, BLEND, ADD, or COMBINE. TEXTURE_ENV_COLOR is set to an RGBA color by providing four single-precision floating-point values. If integers are provided for TEXTURE_ENV_COLOR, then they are converted to floating-point as specified in table 2.10 for signed integers.

The value of TEXTURE_ENV_MODE specifies a texture function. The result of this function depends on the fragment and the texel array value. The precise form of the function depends on the base internal formats of the texel arrays that were last specified.

Cf and Af (see footnote 4) are the primary color components of the incoming fragment; Cs and As are the components of the texture source color, derived from the filtered texture values Rt, Gt, Bt, At, Lt, and It as shown in table 3.23; Cc and Ac are the components of the texture environment color; Cp and Ap are the components resulting from the previous texture environment (for texture environment 0, Cp and Ap are identical to Cf and Af, respectively); and Cv and Av are the primary color components computed by the texture function.

If fragment color clamping is enabled, all of these color values, including the results, are clamped to the range [0,1]. If fragment color clamping is disabled, the values are not clamped. The texture functions are specified in tables 3.24, 3.25, and 3.26.

If the value of TEXTURE_ENV_MODE is COMBINE, the form of the texture function depends on the values of COMBINE_RGB and COMBINE_ALPHA, according to table 3.26. The RGB and ALPHA results of the texture function are then multiplied by the values of RGB_SCALE and ALPHA_SCALE, respectively. If fragment color clamping is enabled, the arguments and results used in table 3.26 are clamped to [0,1]. Otherwise, the results are unmodified.

The arguments Arg0, Arg1, and Arg2 are determined by the values of SRCn_RGB, SRCn_ALPHA, OPERANDn_RGB, and OPERANDn_ALPHA, where n = 0, 1, or 2, as shown in tables 3.27 and 3.28. Csn and Asn denote the texture source color and alpha from the texture image bound to texture unit n.

The state required for the current texture environment, for each texture unit, consists of a six-valued integer indicating the texture function, an eight-valued integer indicating the RGB combiner function and a six-valued integer indicating the

4 In the remainder of section 3.9.13, the notation Cx is used to denote each of the three components Rx, Gx, and Bx of a color specified by x. Operations on Cx are performed independently for each color component. The A component of colors is usually operated on in a different fashion, and is therefore denoted separately by Ax.


Texture Base Internal Format    Texture source color Cs    As
ALPHA                           (0, 0, 0)                  At
LUMINANCE                       (Lt, Lt, Lt)               1
LUMINANCE_ALPHA                 (Lt, Lt, Lt)               At
INTENSITY                       (It, It, It)               It
RED                             (Rt, 0, 0)                 1
RG                              (Rt, Gt, 0)                1
RGB                             (Rt, Gt, Bt)               1
RGBA                            (Rt, Gt, Bt)               At

Table 3.23: Correspondence of filtered texture components to texture source components.
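The mapping in table 3.23 is mechanical and can be sketched directly in C. This is an illustrative reconstruction, not GL API: the enum, struct, and function names below are invented for this example.

```c
#include <assert.h>

/* Illustrative sketch of table 3.23: derive the texture source color
 * (Cs, As) from the filtered texture components.  All names here are
 * local to this example, not part of the GL API. */
typedef enum { FMT_ALPHA, FMT_LUMINANCE, FMT_LUMINANCE_ALPHA,
               FMT_INTENSITY, FMT_RED, FMT_RG, FMT_RGB, FMT_RGBA } base_format_t;

typedef struct { float r, g, b, a; } source_color_t;

static source_color_t texture_source_color(base_format_t fmt, float Rt, float Gt,
                                           float Bt, float At, float Lt, float It)
{
    source_color_t s = { 0.0f, 0.0f, 0.0f, 1.0f };   /* default: (0,0,0), As = 1 */
    switch (fmt) {
    case FMT_ALPHA:           s.a = At;                               break;
    case FMT_LUMINANCE:       s.r = s.g = s.b = Lt;                   break;
    case FMT_LUMINANCE_ALPHA: s.r = s.g = s.b = Lt; s.a = At;         break;
    case FMT_INTENSITY:       s.r = s.g = s.b = It; s.a = It;         break;
    case FMT_RED:             s.r = Rt;                               break;
    case FMT_RG:              s.r = Rt; s.g = Gt;                     break;
    case FMT_RGB:             s.r = Rt; s.g = Gt; s.b = Bt;           break;
    case FMT_RGBA:            s.r = Rt; s.g = Gt; s.b = Bt; s.a = At; break;
    }
    return s;
}
```

Note that only LUMINANCE_ALPHA, INTENSITY, RGBA, and ALPHA formats contribute a non-constant alpha; the remaining formats yield As = 1.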

Texture Base Internal Format    REPLACE Function      MODULATE Function          DECAL Function
ALPHA                           Cv = Cp, Av = As      Cv = Cp, Av = Ap·As        undefined
LUMINANCE (or 1)                Cv = Cs, Av = Ap      Cv = Cp·Cs, Av = Ap        undefined
LUMINANCE_ALPHA (or 2)          Cv = Cs, Av = As      Cv = Cp·Cs, Av = Ap·As     undefined
INTENSITY                       Cv = Cs, Av = As      Cv = Cp·Cs, Av = Ap·As     undefined
RGB, RG, RED, or 3              Cv = Cs, Av = Ap      Cv = Cp·Cs, Av = Ap        Cv = Cs, Av = Ap
RGBA or 4                       Cv = Cs, Av = As      Cv = Cp·Cs, Av = Ap·As     Cv = Cp(1 − As) + Cs·As, Av = Ap

Table 3.24: Texture functions REPLACE, MODULATE, and DECAL.
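The RGBA row of table 3.24 can be sketched in C as follows. This is an illustration only; the enum and function names are local inventions, not GL API.

```c
#include <assert.h>

/* Illustrative sketch of the "RGBA or 4" row of table 3.24: compute
 * (Cv, Av) for one color channel given the previous color (Cp, Ap) and
 * the texture source color (Cs, As).  Names are local to this example. */
typedef enum { ENV_REPLACE, ENV_MODULATE, ENV_DECAL } env_mode_t;

static void tex_function_rgba(env_mode_t mode,
                              float Cp, float Ap, float Cs, float As,
                              float *Cv, float *Av)
{
    switch (mode) {
    case ENV_REPLACE:                         /* Cv = Cs, Av = As */
        *Cv = Cs;
        *Av = As;
        break;
    case ENV_MODULATE:                        /* Cv = Cp*Cs, Av = Ap*As */
        *Cv = Cp * Cs;
        *Av = Ap * As;
        break;
    case ENV_DECAL:                           /* alpha-weighted decal */
        *Cv = Cp * (1.0f - As) + Cs * As;
        *Av = Ap;
        break;
    }
}
```

With As = 1, DECAL fully replaces the color while leaving the previous alpha untouched, which matches the table entry.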


Texture Base Internal Format    BLEND Function                                       ADD Function
ALPHA                           Cv = Cp, Av = Ap·As                                  Cv = Cp, Av = Ap·As
LUMINANCE (or 1)                Cv = Cp(1 − Cs) + Cc·Cs, Av = Ap                     Cv = Cp + Cs, Av = Ap
LUMINANCE_ALPHA (or 2)          Cv = Cp(1 − Cs) + Cc·Cs, Av = Ap·As                  Cv = Cp + Cs, Av = Ap·As
INTENSITY                       Cv = Cp(1 − Cs) + Cc·Cs, Av = Ap(1 − As) + Ac·As     Cv = Cp + Cs, Av = Ap + As
RGB, RG, RED, or 3              Cv = Cp(1 − Cs) + Cc·Cs, Av = Ap                     Cv = Cp + Cs, Av = Ap
RGBA or 4                       Cv = Cp(1 − Cs) + Cc·Cs, Av = Ap·As                  Cv = Cp + Cs, Av = Ap·As

Table 3.25: Texture functions BLEND and ADD.

ALPHA combiner function, six four-valued integers indicating the combiner RGB and ALPHA source arguments, three four-valued integers indicating the combiner RGB operands, three two-valued integers indicating the combiner ALPHA operands, and four floating-point environment color values. In the initial state, the texture and combiner functions are each MODULATE, the combiner RGB and ALPHA sources are each TEXTURE, PREVIOUS, and CONSTANT for sources 0, 1, and 2 respectively, the combiner RGB operands for sources 0 and 1 are each SRC_COLOR, the combiner RGB operand for source 2, as well as for the combiner ALPHA operands, are each SRC_ALPHA, and the environment color is (0, 0, 0, 0).

The state required for the texture filtering parameters, for each texture unit, consists of a single floating-point level of detail bias. The initial value of the bias is 0.0.

3.9.14 Texture Comparison Modes

Texture values can also be computed according to a specified comparison function. Texture parameter TEXTURE_COMPARE_MODE specifies the comparison operands, and parameter TEXTURE_COMPARE_FUNC specifies the comparison function. The format of the resulting texture sample is determined by the value of DEPTH_TEXTURE_MODE.


COMBINE_RGB      Texture Function
REPLACE          Arg0
MODULATE         Arg0 · Arg1
ADD              Arg0 + Arg1
ADD_SIGNED       Arg0 + Arg1 − 0.5
INTERPOLATE      Arg0 · Arg2 + Arg1 · (1 − Arg2)
SUBTRACT         Arg0 − Arg1
DOT3_RGB         4 × ((Arg0_r − 0.5) · (Arg1_r − 0.5) + (Arg0_g − 0.5) · (Arg1_g − 0.5) + (Arg0_b − 0.5) · (Arg1_b − 0.5))
DOT3_RGBA        4 × ((Arg0_r − 0.5) · (Arg1_r − 0.5) + (Arg0_g − 0.5) · (Arg1_g − 0.5) + (Arg0_b − 0.5) · (Arg1_b − 0.5))

COMBINE_ALPHA    Texture Function
REPLACE          Arg0
MODULATE         Arg0 · Arg1
ADD              Arg0 + Arg1
ADD_SIGNED       Arg0 + Arg1 − 0.5
INTERPOLATE      Arg0 · Arg2 + Arg1 · (1 − Arg2)
SUBTRACT         Arg0 − Arg1

Table 3.26: COMBINE texture functions. The scalar expression computed for the DOT3_RGB and DOT3_RGBA functions is placed into each of the 3 (RGB) or 4 (RGBA) components of the output. The result generated from COMBINE_ALPHA is ignored for DOT3_RGBA.
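The per-channel COMBINE functions of table 3.26 can be written out as plain arithmetic. The sketch below is illustrative only; the enum and function names are local inventions, not GL API.

```c
#include <assert.h>

/* Illustrative sketch of the per-channel COMBINE functions of
 * table 3.26.  The DOT3 functions, which reduce all three channels to
 * one scalar, are given separately.  All names are local to this
 * example. */
typedef enum { C_REPLACE, C_MODULATE, C_ADD, C_ADD_SIGNED,
               C_INTERPOLATE, C_SUBTRACT } combine_fn_t;

static float combine(combine_fn_t fn, float a0, float a1, float a2)
{
    switch (fn) {
    case C_REPLACE:     return a0;
    case C_MODULATE:    return a0 * a1;
    case C_ADD:         return a0 + a1;
    case C_ADD_SIGNED:  return a0 + a1 - 0.5f;
    case C_INTERPOLATE: return a0 * a2 + a1 * (1.0f - a2);
    case C_SUBTRACT:    return a0 - a1;
    }
    return 0.0f;
}

/* DOT3_RGB / DOT3_RGBA: the scalar result is replicated into each
 * output channel. */
static float combine_dot3(const float a0[3], const float a1[3])
{
    float d = 0.0f;
    int i;
    for (i = 0; i < 3; i++)
        d += (a0[i] - 0.5f) * (a1[i] - 0.5f);
    return 4.0f * d;
}
```

The arguments a0, a1, and a2 here stand for Arg0, Arg1, and Arg2 as selected by tables 3.27 and 3.28.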


SRCn_RGB         OPERANDn_RGB           Argument
TEXTURE          SRC_COLOR              Cs
                 ONE_MINUS_SRC_COLOR    1 − Cs
                 SRC_ALPHA              As
                 ONE_MINUS_SRC_ALPHA    1 − As
TEXTUREn         SRC_COLOR              Csn
                 ONE_MINUS_SRC_COLOR    1 − Csn
                 SRC_ALPHA              Asn
                 ONE_MINUS_SRC_ALPHA    1 − Asn
CONSTANT         SRC_COLOR              Cc
                 ONE_MINUS_SRC_COLOR    1 − Cc
                 SRC_ALPHA              Ac
                 ONE_MINUS_SRC_ALPHA    1 − Ac
PRIMARY_COLOR    SRC_COLOR              Cf
                 ONE_MINUS_SRC_COLOR    1 − Cf
                 SRC_ALPHA              Af
                 ONE_MINUS_SRC_ALPHA    1 − Af
PREVIOUS         SRC_COLOR              Cp
                 ONE_MINUS_SRC_COLOR    1 − Cp
                 SRC_ALPHA              Ap
                 ONE_MINUS_SRC_ALPHA    1 − Ap

Table 3.27: Arguments for COMBINE_RGB functions.

SRCn_ALPHA       OPERANDn_ALPHA         Argument
TEXTURE          SRC_ALPHA              As
                 ONE_MINUS_SRC_ALPHA    1 − As
TEXTUREn         SRC_ALPHA              Asn
                 ONE_MINUS_SRC_ALPHA    1 − Asn
CONSTANT         SRC_ALPHA              Ac
                 ONE_MINUS_SRC_ALPHA    1 − Ac
PRIMARY_COLOR    SRC_ALPHA              Af
                 ONE_MINUS_SRC_ALPHA    1 − Af
PREVIOUS         SRC_ALPHA              Ap
                 ONE_MINUS_SRC_ALPHA    1 − Ap

Table 3.28: Arguments for COMBINE_ALPHA functions.


Depth Texture Comparison Mode

If the currently bound texture’s base internal format is DEPTH_COMPONENT or DEPTH_STENCIL, then TEXTURE_COMPARE_MODE, TEXTURE_COMPARE_FUNC, and DEPTH_TEXTURE_MODE control the output of the texture unit as described below. Otherwise, the texture unit operates in the normal manner and texture comparison is bypassed.

Let Dt be the depth texture value and Dref be the reference value, defined as follows:

  • For fixed-function, non-cubemap texture lookups, Dref is the interpolated r texture coordinate.

  • For fixed-function, cubemap texture lookups, Dref is the interpolated q texture coordinate.

  • For texture lookups generated by an OpenGL Shading Language lookup function, Dref is the reference value for depth comparisons provided by the lookup function.

If the texture’s internal format indicates a fixed-point depth texture, then Dt and Dref are clamped to the range [0,1]; otherwise no clamping is performed. Then the effective texture value is computed as follows:

If the value of TEXTURE_COMPARE_MODE is NONE, then

r = Dt

If the value of TEXTURE_COMPARE_MODE is COMPARE_REF_TO_TEXTURE, then r depends on the texture comparison function as shown in table 3.29.

The resulting r is assigned to Rt, Lt, It, or At if the value of DEPTH_TEXTURE_MODE is respectively RED, LUMINANCE, INTENSITY, or ALPHA.

If the value of TEXTURE_MAG_FILTER is not NEAREST, or the value of TEXTURE_MIN_FILTER is not NEAREST or NEAREST_MIPMAP_NEAREST, then r may be computed by comparing more than one depth texture value to the texture reference value. The details of this are implementation-dependent, but r should be a value in the range [0,1] which is proportional to the number of comparison passes or failures.
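For a single sample, the comparison functions of table 3.29 reduce to simple predicates. The following sketch is illustrative; the enum and function names are local to this example, not GL API.

```c
#include <assert.h>

/* Illustrative sketch of table 3.29 for a single sample: compute r from
 * Dref and Dt when TEXTURE_COMPARE_MODE is COMPARE_REF_TO_TEXTURE.
 * Names are local to this example. */
typedef enum { F_LEQUAL, F_GEQUAL, F_LESS, F_GREATER,
               F_EQUAL, F_NOTEQUAL, F_ALWAYS, F_NEVER } compare_fn_t;

static float depth_compare(compare_fn_t fn, float Dref, float Dt)
{
    switch (fn) {
    case F_LEQUAL:   return Dref <= Dt ? 1.0f : 0.0f;
    case F_GEQUAL:   return Dref >= Dt ? 1.0f : 0.0f;
    case F_LESS:     return Dref <  Dt ? 1.0f : 0.0f;
    case F_GREATER:  return Dref >  Dt ? 1.0f : 0.0f;
    case F_EQUAL:    return Dref == Dt ? 1.0f : 0.0f;
    case F_NOTEQUAL: return Dref != Dt ? 1.0f : 0.0f;
    case F_ALWAYS:   return 1.0f;
    case F_NEVER:    return 0.0f;
    }
    return 0.0f;
}
```

With a linear filter, an implementation may instead average several such per-sample results, which yields the fractional r described above.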

3.9.15 sRGB Texture Color Conversion

If the currently bound texture’s internal format is one of SRGB, SRGB8, SRGB_ALPHA, SRGB8_ALPHA8, SLUMINANCE_ALPHA, SLUMINANCE8_ALPHA8,


Texture Comparison Function    Computed result r
LEQUAL      r = 1.0 if Dref ≤ Dt, 0.0 if Dref > Dt
GEQUAL      r = 1.0 if Dref ≥ Dt, 0.0 if Dref < Dt
LESS        r = 1.0 if Dref < Dt, 0.0 if Dref ≥ Dt
GREATER     r = 1.0 if Dref > Dt, 0.0 if Dref ≤ Dt
EQUAL       r = 1.0 if Dref = Dt, 0.0 if Dref ≠ Dt
NOTEQUAL    r = 1.0 if Dref ≠ Dt, 0.0 if Dref = Dt
ALWAYS      r = 1.0
NEVER       r = 0.0

Table 3.29: Depth texture comparison functions.

SLUMINANCE, SLUMINANCE8, COMPRESSED_SRGB, COMPRESSED_SRGB_ALPHA, COMPRESSED_SLUMINANCE, or COMPRESSED_SLUMINANCE_ALPHA, the red, green, and blue components are converted from an sRGB color space to a linear color space as part of filtering described in sections 3.9.7 and 3.9.8. Any alpha component is left unchanged. Ideally, implementations should perform this color conversion on each sample prior to filtering but implementations are allowed to perform this conversion after filtering (though this post-filtering approach is inferior to converting from sRGB prior to filtering).

The conversion from an sRGB encoded component, cs, to a linear component, cl, is as follows:

    cl = cs / 12.92                      if cs ≤ 0.04045
    cl = ((cs + 0.055) / 1.055)^2.4      if cs > 0.04045        (3.26)

Assume cs is the sRGB component in the range [0,1].


3.9.16 Shared Exponent Texture Color Conversion

If the currently bound texture’s internal format is RGB9_E5, the red, green, blue, and shared bits are converted to color components (prior to filtering) using shared exponent decoding. The components red_s, green_s, blue_s, and exp_shared (see section 3.9.1) are treated as unsigned integers and are converted to red, green, and blue as follows:

    red   = red_s   × 2^(exp_shared − B)

    green = green_s × 2^(exp_shared − B)

    blue  = blue_s  × 2^(exp_shared − B)
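The shared-exponent decode above is a single scale applied to three integer mantissas. The sketch below is illustrative; B is the exponent bias defined in section 3.9.1, so it is passed as a parameter here rather than hard-coded, and the function name is ours, not GL API.

```c
#include <math.h>

/* Illustrative sketch of section 3.9.16: decode the shared-exponent
 * components of an RGB9_E5 texel.  B is the exponent bias defined in
 * section 3.9.1 and is taken as a parameter here. */
static void rgb9e5_decode(unsigned red_s, unsigned green_s, unsigned blue_s,
                          int exp_shared, int B, float out[3])
{
    float scale = ldexpf(1.0f, exp_shared - B);  /* 2^(exp_shared - B) */
    out[0] = (float)red_s   * scale;
    out[1] = (float)green_s * scale;
    out[2] = (float)blue_s  * scale;
}
```

All three channels share one scale factor, which is what makes the format compact: only one 5-bit exponent is stored per texel.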

3.9.17 Texture Application

Texturing is enabled or disabled using the generic Enable and Disable commands, respectively, with the symbolic constants TEXTURE_1D, TEXTURE_2D, TEXTURE_3D, or TEXTURE_CUBE_MAP to enable the one-, two-, three-dimensional, or cube map texture, respectively. If both two- and one-dimensional textures are enabled, the two-dimensional texture is used. If the three-dimensional and either of the two- or one-dimensional textures is enabled, the three-dimensional texture is used. If the cube map texture and any of the three-, two-, or one-dimensional textures is enabled, then cube map texturing is used.

If all texturing is disabled, a rasterized fragment is passed on unaltered to the next stage of the GL (although its texture coordinates may be discarded). Otherwise, a texture value is found according to the parameter values of the currently bound texture image of the appropriate dimensionality using the rules given in sections 3.9.6 through 3.9.8. This texture value is used along with the incoming fragment in computing the texture function indicated by the currently bound texture environment. The result of this function replaces the incoming fragment’s primary R, G, B, and A values. These are the color values passed to subsequent operations. Other data associated with the incoming fragment remain unchanged, except that the texture coordinates may be discarded.

Note that the texture value may contain R, G, B, A, L, I, or D components, but it does not contain an S component. If the texture’s base internal format is DEPTH_STENCIL, for the purposes of texture application it is as if the base internal format were DEPTH_COMPONENT.

Each texture unit is enabled and bound to texture objects independently from the other texture units. Each texture unit follows the precedence rules for one-, two-, three-dimensional, and cube map textures. Thus texture units can be performing texture mapping of different dimensionalities simultaneously. Each unit has its own enable and binding states.

Each texture unit is paired with an environment function, as shown in figure 3.11. The second texture function is computed using the texture value from the second texture, the fragment resulting from the first texture function computation and the second texture unit’s environment function. If there is a third texture, the fragment resulting from the second texture function is combined with the third texture value using the third texture unit’s environment function and so on. The texture unit selected by ActiveTexture determines which texture unit’s environment is modified by TexEnv calls.

If the value of TEXTURE_ENV_MODE is COMBINE, the texture function associated with a given texture unit is computed using the values specified by SRCn_RGB, SRCn_ALPHA, OPERANDn_RGB, and OPERANDn_ALPHA. If TEXTUREn is specified as SRCn_RGB or SRCn_ALPHA, the texture value from texture unit n will be used in computing the texture function for this texture unit.

Texturing is enabled and disabled individually for each texture unit. If texturing is disabled for one of the units, then the fragment resulting from the previous unit is passed unaltered to the following unit. Individual texture units beyond those specified by MAX_TEXTURE_UNITS are always treated as disabled.

If a texture unit is disabled or has an invalid or incomplete texture (as defined in section 3.9.10) bound to it, then blending is disabled for that texture unit. If the texture environment for a given enabled texture unit references a disabled texture unit, or an invalid or incomplete texture that is bound to another unit, then the results of texture blending are undefined.

The required state, per texture unit, is four bits indicating whether each of one-, two-, three-dimensional, or cube map texturing is enabled or disabled. In the initial state, all texturing is disabled for all texture units.

3.10 Color Sum

At the beginning of color sum, a fragment has two RGBA colors: a primary color cpri (which texturing, if enabled, may have modified) and a secondary color csec.

If color sum is enabled, the R, G, and B components of these two colors are summed to produce a single post-texturing RGBA color c. The A component of c is taken from the A component of cpri; the A component of csec is unused. If color sum is disabled, then cpri is assigned to c. If fragment color clamping is enabled, the components of c are then clamped to the range [0,1].
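The color sum rule above is short enough to state as code. This is an illustrative sketch; the struct and function names are local inventions, not GL API.

```c
#include <assert.h>

/* Illustrative sketch of the color sum stage: R, G, and B of cpri and
 * csec are summed, A is taken from cpri, and, when fragment color
 * clamping is enabled, the components are clamped to [0,1]. */
typedef struct { float r, g, b, a; } rgba_t;

static float clamp01(float x)
{
    return x < 0.0f ? 0.0f : (x > 1.0f ? 1.0f : x);
}

static rgba_t color_sum(rgba_t cpri, rgba_t csec, int clamp_enabled)
{
    rgba_t c = { cpri.r + csec.r, cpri.g + csec.g, cpri.b + csec.b, cpri.a };
    if (clamp_enabled) {
        c.r = clamp01(c.r);
        c.g = clamp01(c.g);
        c.b = clamp01(c.b);
        c.a = clamp01(c.a);
    }
    return c;
}
```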

Color sum is enabled or disabled using the generic Enable and Disable commands, respectively, with the symbolic constant COLOR_SUM. If lighting is enabled and if a vertex shader is not active, the color sum stage is always applied, ignoring the value of COLOR_SUM.

The state required is a single bit indicating whether color sum is enabled or disabled. In the initial state, color sum is disabled.

Color sum has no effect in color index mode, or if a fragment shader is active.

3.11 Fog

If enabled, fog blends a fog color with a rasterized fragment’s post-texturing color using a blending factor f. Fog is enabled and disabled with the Enable and Disable

commands using the symbolic constant FOG.
This factor f is computed according to one of three equations:

    f = exp(−d · c),                 (3.27)
    f = exp(−(d · c)^2), or          (3.28)
    f = (e − c) / (e − s)            (3.29)

If a vertex shader is active, or if the fog source, as defined below, is FOG_COORD, then c is the interpolated value of the fog coordinate for this fragment. Otherwise, if the fog source is FRAGMENT_DEPTH, then c is the eye-coordinate distance from the eye, (0, 0, 0, 1) in eye coordinates, to the fragment center. The equation and the fog source, along with either d or e and s, is specified with

void Fog{if}( enum pname, T param );

void Fog{if}v( enum pname, T params );

If pname is FOG_MODE, then param must be, or params must point to an integer that is one of the symbolic constants EXP, EXP2, or LINEAR, in which case equation 3.27, 3.28, or 3.29, respectively, is selected for the fog calculation (if, when 3.29 is selected, e = s, results are undefined). If pname is FOG_COORD_SRC, then param must be, or params must point to an integer that is one of the symbolic constants FRAGMENT_DEPTH or FOG_COORD. If pname is FOG_DENSITY, FOG_START, or FOG_END, then param is or params points to a value that is d, s, or e, respectively. If d is specified less than zero, the error INVALID_VALUE results.

An implementation may choose to approximate the eye-coordinate distance from the eye to each fragment center by |ze|. Further, f need not be computed at each fragment, but may be computed at each vertex and interpolated as other data are.

No matter which equation and approximation is used to compute f, the result is clamped to [0,1] to obtain the final f.

f is used differently depending on whether the GL is in RGBA or color index mode. In RGBA mode, if Cr represents a rasterized fragment’s R, G, or B value, then the corresponding value produced by fog is

C = f · Cr + (1 − f) · Cf.

(The rasterized fragment’s A value is not changed by fog blending.) The R, G, B, and A values of Cf are specified by calling Fog with pname equal to FOG_COLOR; in this case params points to four values comprising Cf. If these are not floating-point values, then they are converted to floating-point using the conversion given in table 2.10 for signed integers. If fragment color clamping is enabled, the components of Cr and Cf and the result C are clamped to the range [0,1] before the fog blend is performed.

In color index mode, the formula for fog blending is

I = ir + (1 − f) · if

where ir is the rasterized fragment’s color index and if is a single-precision floating-point value. (1 − f) · if is rounded to the nearest fixed-point value with the same number of bits to the right of the binary point as ir, and the integer portion of I is masked (bitwise ANDed) with 2^n − 1, where n is the number of bits in a color in the color index buffer (buffers are discussed in chapter 4). The value of if is set by calling Fog with pname set to FOG_INDEX and param being or params pointing to a single value for the fog index. The integer part of if is masked with 2^n − 1.

The state required for fog consists of a three-valued integer to select the fog equation, three floating-point values d, e, and s, an RGBA fog color and a fog color index, a two-valued integer to select the fog coordinate source, and a single bit to indicate whether or not fog is enabled. In the initial state, fog is disabled, FOG_COORD_SRC is FRAGMENT_DEPTH, FOG_MODE is EXP, d = 1.0, e = 1.0, and s = 0.0; Cf = (0, 0, 0, 0) and if = 0.

Fog has no effect if a fragment shader is active.

3.12 Fragment Shaders

The sequence of operations that are applied to fragments that result from rasterizing a point, line segment, polygon, pixel rectangle or bitmap as described in sections 3.9 through 3.11 is a fixed functionality method for processing such fragments. Applications can more generally describe the operations that occur on such fragments by using a fragment shader.

A fragment shader is an array of strings containing source code for the operations that are meant to occur on each fragment that results from rasterizing a point, line segment, polygon, pixel rectangle or bitmap. The language used for fragment shaders is described in the OpenGL Shading Language Specification.

A fragment shader only applies when the GL is in RGBA mode. Its operation in color index mode is undefined.

Fragment shaders are created as described in section 2.20.1 using a type parameter of FRAGMENT_SHADER. They are attached to and used in program objects as described in section 2.20.2.

When the program object currently in use includes a fragment shader, its fragment shader is considered active, and is used to process fragments. If the program object has no fragment shader, or no program object is currently in use, the fixed-function fragment processing operations described in previous sections are used.

Results of rasterization are undefined if any of the selected draw buffers of the draw framebuffer have an integer format and no fragment shader is active.

3.12.1 Shader Variables

Fragment shaders can access uniforms belonging to the current shader object. The amount of storage available for fragment shader uniform variables is specified by the implementation-dependent constant MAX_FRAGMENT_UNIFORM_COMPONENTS. This value represents the number of individual floating-point, integer, or boolean values that can be held in uniform variable storage for a fragment shader. A uniform matrix will consume no more than 4 × min(r, c) such values, where r and c are the number of rows and columns in the matrix. A link error will be generated if an attempt is made to utilize more than the space available for fragment shader uniform variables.
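The matrix storage rule above is simple arithmetic; the following sketch (with an invented function name) makes the bound concrete:

```c
#include <assert.h>

/* Illustrative arithmetic for the storage rule above: an r x c uniform
 * matrix consumes at most 4 * min(r, c) uniform components.  The
 * function name is local to this example. */
static int matrix_uniform_components(int rows, int cols)
{
    int m = rows < cols ? rows : cols;
    return 4 * m;
}
```

For example, a 4 × 4 matrix may consume up to 16 components, while a 4 × 2 or 2 × 4 matrix consumes at most 8.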

Fragment shaders can read varying variables that correspond to the attributes of the fragments produced by rasterization. The OpenGL Shading Language Specification defines a set o